[Binary artifact: POSIX (ustar) tar archive of Zuul CI job output for user `core`. It contains the directories `var/home/core/zuul-output/` and `var/home/core/zuul-output/logs/`, plus one file, `var/home/core/zuul-output/logs/kubelet.log.gz` — a gzip-compressed kubelet log (original name `kubelet.log`). The remainder of the file is the compressed log's binary payload, which is not recoverable as text.]
Mvc|?jќ^TiQVB #PmuW &A2.ci6>-Yy`(*CZm5Q+vPrKY г5JJne,(lDB&B=q0,=mz$ |ɳfr>zv!.O>:GGnCD1i)d"=hA=n+X|_yqrv6aoȎ&˂}(IUE|,Iq5F6+ M`@^a{VEG-<ͳ_6|Mw~UZ,y ' ה^U_w&ߩF{|vTVjsXCma jklܩE2ecSF@UG=X+^"rp'N'ʓ6qJ8xL£HcX7Bmgq<|/C%~ֳ3U.3O<* /#:Q[Ϗ1юFq\o+x:saQuc1X'r-.^ S'ɛegcr9Q\N!v#ol0ReeT%npd;!ݽ]K2ܝHd&xuꔙT_`4tHٻ߸,% Ɣ> Gq&N;BRǭn[W$}هl3N$^;hskhzΣqJk[rnozr7+R77_8nhg}95 .ynmpܮʃr\/<6V9-rf6xǵV9ny͍vPV7x߅ ?=ӫ_{&XCL!uW] gtqӮ-kV̫+>*Y6%s2קtR&lg)w`)WYg![ yi[ ILbfzFqV8"=)Ej(uZ;aΑf"ДцTQU>P;cLnAM:N[N-Lfv2`74rzz43iKH3Duozuws]!`e4% ETaD.Z`Ea.AI( ,E5hOU*DHƤ13l#X10B"G`x8bpqym{[Ydy,0Wu5{gyV-7Հ&~~ޠRkg8$rX]kyх/z}X[8nlz_0 %z!0V?WoY(]$6*7ʥrI(2cEJ:ƉˏQ?ҪjnP2v.Yx-iy Ŷ{ ;_}5ҳ~\p aﳃ:} h<:, FǮ4FCЎyzo z7 Y):4Hm"[M qb0>EAsQiXی uјhw/o|PB"V}vr2 gga\]n34r58WSqyRkç_{k`0\KnLz2؃J-0 G6m!Trܫ^e)u8)\g3N.%pwd& L5ξU@&V }w,tmݜo j~gZ^):>U4X;Benqq& K$B$R)drEz:oc \0H<?tq D,L@;6x$Э@8lE5kEgW+F 7拳@R=\.bIhNnF{o[iϼk_@zߦVzbVz-RJm/mҾ K6Kˤ@`&̜=-y\_&KUYpV̍ޓV[ekMH [+G[ܣ3$IZ.ti)WG }pc5=wCsΩ2Lk.0dXraD]e(9Ϭw59ZffjۻZ.%Q?-IITzɰ,V8D 1D-!:HĵZ6N&Q1`G D8׈Ȩ,X.ZL*P*5 T*5"q gˀZqZ)+/Tc1Hݑ sU$a0ŷb~|fA%;>g@`N5_8AB!R,p,|t9m櫑C,5~j=8rs>[x=6}OŅyɼps0 \OA˹z6s $דq sl=]4uCX4v3/,oA| LpqEo(K6LY::}ʥeØ%?#|aܿ@E5 wecAy9sso~˓~z~_~˷OO?9} XNPMC`h!?O^еjko5Ul9n ߦ_+7{C}8a~WBɧCoRis3R(l&gf3?} j~Q*F4M\1G%|tTn o:* &NxZkV,q7;tS{-SıaC& M9sSM~R3b`6QxQ3]<^~#%h~FswSibҩ N#;)QN&SMFt0cg3t oA2B`C$1%ZR58=eyULXEUʂ;D+0\>Znd?&m zh==fkn#GQGWCJvgk4e) _۬'T; 'S; WR;V^;i~eE_{%5g-fD8.R*3!Qn9wD 8quUIVܦs@3^ oYlcĽhMDi +&E% kHȸG)K0Rql  -&n2l"L*Y^4ۤڌH%dK#teO'* ]6>}mޕ XCN_Eass]r!M]kfYq6 ݓ w, n5evZZ"3j{.L[V;^ ^֥y}S1y: yX  }[SV5!eÑ[I-lZie, nkșP@alV~e Bkm$WEا׼/pI8I$v5W=3 bbY-mʒxnͮ*~b}6Xc-:S>v.>7X??xf힙E[J\o%CԕOy, Hq ?rCIǘAbQħ Eo#M$åٚ&끂%̋D$U8%"x(e;zP~DП\<-rDBvěL b>eFJJ:N.`!ZMŧ?`k<7׻Ӛmy "Q+s5NLВxgB\xEOGٯ߶O[e嘛}y;+ #NkZ jRȧU[cͬQ?ijq8Zsܩ}F|ÚګqaMJHiJ/?C8kRG9 *!Rjme˕//:2YVf;ڽlF3lOߨF^EqC3ʎRA4o*с8>i *Cl&>_3 jmQTm갎x^u^?| Z!zs,cvi#}.QIRfZB#2io!h{!.p-}yk+DUqn<]nov9$9#h~X X"L4:Koѱҗ룡QKo 2E Aڔr6FWf|hsD?X5hwH%U7}i29N"4LflDmtspY&qNRu)88t`7AQÕJ `+ᢨ&mE do$:gVp[ʶϟ[; /[tK*-Qz-qVq|>~wtȂPENLV^sg643_i(4/WҸ˜n 
h%3̝KKsp*t"4k!BIíTE:( osD$c% FpohZ{!BfhS̳okպsح^;nU\BmLB!>+Z^5ٻ"ҟ$H$Y`sL)B+sTu؊ҽӦҽ NJ^Zv^ttvH{g 9ZR=~֬(Cd9JrZ#d:NU[*g]>\ eGc^JYPkR^]RS.sGPǜ 7ZH*JihtY$ʙr*&Ta2IG`I3AYYS>XQ c尺(U)tt'/WzE7"y.E@հmW81c;f$ϝڣj!Wʛ~4ڡ0.CqUt%KeK+_uUc1WTZ/dH4rH)hֹ3uIDnAyަ$Jvcb[ 3G+\mayV)*{/n*6C0A=&B$J15ADm6u< kȷaeOw+|}V?0!b,wiup|ieG.>Ѽة7>G uaU4VطuyvžERܻlsQal٠7ޜhfK:ڐԦ,49 0GPp*q^eYoej RMw]DrqآZbe2tВn硦O3}"%f6i>elA1XWmRk%oҪ y={fRxw.]ӽWfkls'] lR9s] 1隂cV&ŰMz٫++󥘘ys2=N=7ZOǫ6xԅJ[uD t2ȭ+r{~o+,QPhJK- %t6R#Xj R$UCZ٠Q4jv]E6.V?9:c N`. Yt~ӌW)I4yJ3@#St =F(! 'pP %>tZ*+'œX4YH)`kWh)J5@=.\AZ{2}54ZVA[(ŏh@T!/h[vX唲+5RaosVVz C)kV Njy*Kp&un9>s<= 5C $ & Ly-4XdxJhKGGxKGMuP9Q x|N~ms$f\R©tk 2\D06j{jgz)1'=;/464)iS |{G2u?|F^XʞXSD93P\9eBw%-MSF[WC;aZt1p?҃՜=9RP i%Rd4WjR2kYHW5)<@f-X>&PSơ•2*g\R &8YXC?<4h+ؕ68ǒrБ]Z탿l@dru2/rI+tT*DƩHJWd@$ mP fOaAHT\*8O+Po5Bi.pA#lƸEYdRY՜AD:Oѥ9RG/r9U4g>ZSY LN_FQбAM4혚vv8@8$QmLA\nkoUzvYfzMD1WKjm^50Yͥ.wj4dHoi8 ŢqUfy"pT/X ( _g;&G nݣ'4j\TQb2e$ &.u 7tvpoT\nhSEَNUgaK`ǯ~o~m~zwݿÛ~v`f!5ЃIapkC\Cxb-8o3.9qos'.}Kb/~؟ yf+{itO&ᢟYRC WU{ڛۖT}*rsWT! vmMwUo[Zi[^5v8`6D!J3q%Ub1^|iҳs?9nڔ&?̙H1 w Bʊ90L/?3hq~'8Ŝ&&:B$a=Pl4A) >v6cͷԖugyU?GĈ$)KamA1OѲӰڮۍ;pW gsu:iuHY=^ӯR/Yy'֐cmɞ+N'eJ&Vb}wm%gsxΛ_1a(Ry*+#_KEڥ*PB F#ѫ :+ mQ-de*wFSuܤ Ĺml[s*f3=AM4X'V-˭RuۄZYc6Li_c:Ss0%q%j]o`MLI:i +0XdoIFN8%;$r*M}hyyrDǔ;f99gb@{%"uZ*#fDsbVoკlOLΝ RIMO9e;n뎷GMj3WG4}{^mШlte3D8rDi1 (g AX:# !ԎaΝuӉYt mA dkB e,˰۷2NO("bQd""JDP+t3x:NdU;34bG7X0)|aeA?I#=Ftl'TIhyq';ڬ5J*1ZQtT`D2fQ@ bh)1!)jVK'@k`6F0)F%V!祴QM)R$U$Z2iQA<}wsXiq5 .PmWO"9;M?+9mȢ']*5Xم#G`+el>3eV<'zd%_=ۀ6|^Z)400&DO) s?*U*xRJ@p>\ hxLWXM qw3Uj/ %tke)3ǠHKh14 Q>d(1u+("x%FU扭[ޫlsr=+~c}~%X@fz^=ϒji.XzXeOuzVr;q;CSxQq?0ɇ/ *ri sIVXŊ;u(XWc>FʩW֥$59Dyi;2Z1=bgK,a0m)syCp$eLqFY,pL_ro8.}ohMӏ:X-rgtm'v@u>]Cm&lnXx%v8RQ#,ux:8J M7/JoplQhtkM-vwS!m(e%8\6e;"`v5W-NBG/X&J^^ߘXcI[>}y Dfms$u]b:rz;-qi;W-#ݩ7Y$mkXCX)pIfӝCtOm79'Om3& (n!}{:6-e=&i&jZvǁ1|;,3J+u"":}Wr=4/wo˝ vr4'(d7Jr~*ٍ+U.4QR. 
n$8tF{/n`.XaVˏ_J"D +*BPBQĩZڅ?F댴ɧ58SP0+l;E5͌8"LH((7l(On{ g]v-fmہfyEx`ZA eE V`EM*sʄr6taM{ɄBQe~ \L"emV:Jd̈MF{9ȁ"X o*uFBhg<ÞFc;%py!KZKnh:$xĔPϕ--d} HŤϨ" id%ۈPu>|0xoA#)ko#ʫc:-:&\>~{}&pXo{w]ƳO6.^ "wV$ꊱlG5=2>wNuFvݍyq{ڮvkyol#dJ0v7~|σniWryu]}uOuVʥwchVf5ӟ7m[ُm@~UuutEEn% 8ZS:D؝JI{5Қc|6S2 t<+(` BQ a qoP`XA1&tYDZ%T"ZFA@J!#V Uduw gW1h'rw>_U% <<._M%潵z_>yWRz*~emHS] a11QѪd5 CޕPjwU=҉-_ޞ^ @/tP- }*x[oUծ72[zASoG&3`3?̀5Ȧ~{g7_*i =hTi <N|%wmX@>c3I!T(dt.w[%bX˼zM&6utCQB㕇?zTTK^*Z,6]eY_t.95NԳ&(t[z1.cAToۛw,Őh},u{dYš{+ٻ0}<ǥ+[acc8Po vEVB={bVظRGzx/٥0ƯvB|/38ğ`$I{݆so3R)Göۼdbz .U#Y}d6?"B|9H/-K U=xH[ࣗr@duwfΧj_Ta.f`T*(j=Uz1_e}츮2kJAAnIKzwS wfPǨ 23^tMZMbKDRí&I>?*8"r9\ #P$ A"5ו;Ձ*Iu(DL:ǬH#T`ŐEZFHlAԗOLۤ\׮\bIw:{{U2&`=y,>,r1LFZnP%U +s犓8_-w}WΛ|畯-bsxzw#G1`}`/ e mr>>ZTJ%)mIv-2#HF_\}(NAeD:Y VܐSq*b1Elm'lq@(/yZV~h^cnAЊI)h5 Z/zi \#%g٭ V3͵BP5 !U7bGlߚ3qB:\|G䋗zk~g:^ZôtVƔtO_뮐sil)~;~rL ܇7{l/y`U?vY*Ti}l()c<բ:Ť" ]~WǴU74yމ(7 ck6e>1-~I3 j-H˖5%+!bO@a/j\{0p6$O'of27?釧`~yq|Px3C@wBPGQkMiwƷ]6g5:q(cedxb0=8c$w#Jns )I"nÕ /6>Z[ݧzy#.Yx%k}akio?9]O??]hWp+|TĆ a4Q}ZȵPԆj 15\z~,Jo:EwTП,Kpͼsիϭ2w/-aM׈:<7Oǒ. Svq=H>,h-Q,C]c (B︦0eکTSL .u:7iy;w3! 
#VӚ9=iS1m1R "'p >ٛ񦷅WVfQQF>d=3'pVDŽ,UP(d cncYkh skڋFYAAB-'pOٓ=ӿ'[Hk5_8"ɬtm\ ˰fƖ}8ky<[hф60' 1pj* ϽBQ5O0Vw $^X a[f֠Ԝ%ȥz_ 9 S/[d">Ͻ9[ хzfO7 :KF EWa1K9gt2LI1CZ@bgĭi8kiiql0514}udcNA gq \gBGfQqUE!3( c:WKbV]!{֤[pߴ- eJC5Q9d,=:,Ъеn2X́ZNQ%p0T(@9 =5`^'@x]ZU :?QfH6'` @)2и0>JZeR:AoC64[焇ՓsU"k_/!VgQWAZ C&9#nՁ):+xʢSh0AT[M9J 2 3c=~16EY&[{j;?}wr/2%!ѺAӌl Ls40A!('yjM*;]Yyw~|]KٺӾcp =ߧgi!“T!D|fc|$>Pߖ}Ee[ZwL9P?rj?#0= _Ik2+8[~ [Ka:(+ _tRvxrF?N ߠ%zT=ᥬYȥ)T'@pC5's OY]{vUjqݹ,~90ه-$R{란,^9"BDw_XY|z>dFY?\?r6ہ ]_]m;9X.:U[EmWZ:Թ_;`8>zr1+aWǿ4zZOGXU3n0~~"#0ovR71g0 lH׳<>rAP8zV(ɒ}ɉ{4Md piC;l:vǡmj%5ZoJ>bW8U Nd)(cxN,"YAx%x zـ4#5'[hmliFvxc-lrj|(IeF٬0T< XPR{JCzFuc^,E6^,V8iqQk=z; :Hz" fcw)7kzR~Ձ(5.1B5Hf&& EյKa sֺ0tE(I13"iH̏Ѐyřjμ-HdQXz -KV$3̏Ђy;MavUӾd*X7xW8G(RY~Y~nYeϧ |&-9Ceif5sԦL+6[ YTmFca<-bD:6"}({j%d㢰z" ImǦ~&8E}Q}i!}i6,0Le&pPpl-Lʦҵo')y6,FS6.|es "%t34LOvh8.1YI)UY{&eSwZRꎡ[퓟IY,I)|÷ [WjT.eQ'ӎ]vώ7R')j3z2k\4f%%sEd#,$4te1Z=]܍\ qIR LD.l 1?Bum՘yAYfN+ŒDZQX 1PaCaPhͫ}& Q`- u1Z0_/;j̼1O9̺Vi1w6CtyvsL{SQ,|z76u;f_|O*l}1oN~w`H޻ry|,_c25\mJ か\_SDxPu<,M({Ӿ@/0aE<7|M%./ӦJȓ.0t5'[FCo'߮Nl}ݻAHoŧyOtU(W(^H+LJ5.cg`7I4Ut;_K@}uq_gڰ5 []pzY?'>֔v}4{#VTzb;&c)JJ]/%%" g;M2JKʇ1s޺2"=Bl!6@/QSfnZ0/nNx4#hJQb J 0?B ⭓ <_c20UPrr|\**Y>Y5#0 T|ZV$xu+y670RSf-%?[cAip9 $H{0)3B[W.g.A?5xfM3mQI# GEΩ"qsIJ DJ1W;3m žb+VCX'jC-s Ŗ#(`lvkԶaі&IaL"3O"L0?B AZgQɒ.#PORDE* `~3/(94hR*D k '4kWXA[֚y_c6!kb$@}vѳ.c(`^[\Sq22JdT<9q母%S̏Ђyck&|5)Pu֞ZzOl]3%F-)+ eehFFPh`pbY@4ϒq{P1C)1Z0/ͫ>m¦r<؄W|hP,g9%2dQWAÜfa*# $WJw-7UUރq'r]R7*~.Q;r}&dU|o!c!zXuyj`'> rvލh\MhԷѡV<9*́-CA9O1RsDFl9yG].Ry7Δ0V1O T[M* *H qji2??,^ K2x`\xLUՆʝGKwLo*qwG!j?SƐkxaTx0+‹O>̨N u6_ м=tj9ة=x)|Eu1)oGZmĮ6$xS2툍c)hyb~BLO59(,gŋv򚔋_6Q0`QcfTJzSo z 'ݽUy0f^E [c콠gpE #Brqz1 1?L`Z|){Y.4"30p+?B Ud1xg׋tv1xSp̉}L6WɁ;F"xp2<-McĤX^wfZpr[*dE[aFlzrm:̈́DRZԉ]sZƨ=c(phb=ASU0ziO?{WFdJ_f>  pcǽ,!)MlՋYSbe)ueؒYU̫2^!9I#>]J(yC[<@sT9H{@'w)@E6<5l@ 졠,U"]J(yKG> }S<櫈ADF#yPҩ1aL `Ge(sPZpQ\lQR9 ֦#2ߥ!`Ģt6ch:K("&3rjץL *iBJj圐(hw)D>dF2 -sHGnQ* #)oP<@5s x[jΐQ#lBI"RB,J|$ZȦiL e<;x]J(ywU$!LH!fBW޷66A?Ζ0ʂ&( 
6FuL;X#)SHQrP.rᖗ,ݎkxжaȏ'wr՜i9sCG"4^Nue#e͞~߽d^wHFQȃMk"r֡az|7bQtH;݂/GvC{%MJz|[=g9;y~G<)xTxHmmz7ÅkOO'ģ>M&{[7[qA)nk-!:r1&̱S]J(aP?y ia1d ƈ=v(vJ(yŶwJ`jxVx|/ bLv#Ɲit)$rq-||{v{`wyҹON*cMOy*asa9!Q$DIRqofOV~kv۷"!s(}k!O교"Z. joS^KJ9|$8) Yj2ߖJCtdO0[RKAA,8~Ich>8~ Zt'miWJ3|9բ+\oq6^L}ܖb]4|]UHbnuu[4Oh 8J dV@%0JM4ӼXY XYF3nI MdZ"@H$s᭲$&ԔkTHܜT48MicV]9!W(ts{-+?|rlb2ivt,7RZn%svVC <յێV[_y{"M/oߦW9/(ӃpcUg8c }Jy))zNZV ꔈkÙ.7}s 4K# i3X%8Zm0 S9M}ҔDP cuvm2 {`]q9dXIT6QUb b$96&*±0XkNGϢXePd〰GF<:*'xʀZy2*4Ę16>HY0RF.z y@ YCdg 1Ka9atdJ6%FpiqY,|l2^o*7 Dru$|E5S7u= r "pjI"ri$ γ{`o[#(ZRػٶ ɿڝ'uz{-tmEv1#gŤ穇*:B3xEye##rK>bOڄF`V m[L#yF=.nY ;B- q!!@"$m$F`x*]pIS.ԙdc9g&x҃D9XKW&M1o?TyLGP_NΖ+Ex6"\SrL}ؿ';bv VL~x jygW7ȫgv~һxo`7_2g]s%oZCcEj$VĺnhYrՇ_jӜkl} &y䢭A ];񿖏?,pA^wihӯ/ͥۯjZR5\u8V^n5owh`KzU`15$ m<`6Vl:/6Ւ<S(3W/ O< J.qږqr@kαC pH̹ӶO&)s.ߦ'}>M,_tbV|:ܩCsz4GF὇:W( .ѾD/v/?9D>?EZAT9Yu`p9=+;\PG=]fX#=1Qbxe_¶n0rD`"kzBmul-;lR3D֟|^ػ\_F'"P!0F?-E1+&K΄Jqa5' ) 轔L%%hm.dr K2`R%di/Mrodg~~o1z%p F&txo^)rgsGЍ"EBJ$%dQ9H2F+z(z DhG=8EP\$5pi4Z#9AbIMR":Xt^pvW2:'1Cٱ=V:1n޳"dkD|t?35|)mhȒֹ?rYSE@Z &ܠN)cjOV/mQO۵7έǷ{Wp@ V?D?=@-F LZ>j)n|['AOk ,xߝ;ntݡ:-Cf}Fug'<>zw02_"[-.NTEe{6"S״.&XP. Za^XAolp23&*%ؠ-c"`TJځndT$Թ#cʰ-Q,SQ8cCtsZ Nd ]$ }~)Ƨ/ZNvPr}|0-^LyKW-rQg*sabITImIaň%Scv4 /s8J=* ʷ. 
?!E:$+.s%dT;AEXG1Q4K8wx8, foIW>2&Fl ce95&@0fbNED  ˑVg_O:|w,rY)PNʉ@kS 5J&8KK-@ @ h)뱨,$]wَKJByd-QC͟g?F;_Lcum!&ΫnOTy›aNo߿z?Knْ)y0դ%R`#2mZ϶8Ɵj+|Lq^aN;Xe.}nL^71B1Ⱦ1?TWd=&eѢ< Fr"1]W.+L#pgpܼm(~2ܽ/Qk?NKeLJT*zs ov7-3Xdw.ćjO˧c[sm}b~f[{M4ƜσQNϚef3[k$wy͙wG`4jҪ&.tV mZdy"h\`LƣeŅu[&[+Ňk]u־r7βE\HXՏE1Ʃ{/sRG+T?P9?MaQsLǿO~xO?ޞ 7{skX#0~ ߺ ݮ VOo}AպTPߪjy)wrKGP zU^[|~= f>n_RVgvVez7L2\BV_f7ՃYPǫBnV>jѰl o۫Za7/ۊA˂|LeOf)U_qv嶵M]?{ܶ'5*fSL$!)~ D@$F<$c ,MxCLr+ċXŇNg߂;}wߩHI ~e,ё2q8M3h[AM`Qf3#߭_> z>'JшrNo[nvg2 f8O(Ff:aek&B ~ƆBiƉLQ&,\{3M .JĥGZ(f'"tj,,Y.dOČJ%1QFƔqܲ VI]5Tm"\hx9~Rim/=]3Y>J ,CVuUA9֪<L(?rC]_\xTGae%g7WgM4Bcװ.?oueVY"0&6kyhKCE3ɔ!6{I[@_b⃌Q<ƈ<7'YkmMJ"ޤޡN;U+TA]w\ݥA4+0̏W-Ԁt&kʕ•qVgCVq(ռ16Z\0%c8MBac6LtbrKPDΕ$rEJSv%Od}i"D˜$J)YgĦiD5iL< yJ""`Vv[d i* m 5|x()ػxGGM/;C`f_ z-P99OPD$IQjHL HPP$T&3mALDy&vā謷{@@6&s8+|8 Q0c߮F9fRYgŒ(jWŒVއQ>80a(VE@0\\M]eeZ> UV/T ,6:+lԙE\T{y\9+ڰpB8Z \W ;2z ۩=si \QCr-qW ( @\Z+TiqŸ:+, W(rWpj%c}j* >RfWHWpj9;P0Wĕ @ĮPv&vj%u* P `F3B+Pˉ;P%!J wW XP P ՚ޯ JYq=+-1. `ܱP;@-B>>L\Y+C}}p(@v:).Of:Ay D%w|TGqTl-`巸y:YLo2UջϻX/U5&ۺV28 0(L" !YD ҋAq~2],z[/ZIj zy=5xy'[wӗ Ue4v˫'FՊ;y)FKFoYN\['/E8oF`1rs?+֩c@w~VOGSlM0YD@  :}#\)|o3'x@,^tZaln>[֓N6bY n/:6)@S. gl-|I'CHt:/#-u0RѦsp3]NF>6f/,}vD+7LK((o:UJ]JuA~J]ܺi͎S G1,c*;֏^CIq2gWΫ׃fj&.] XBk%碣.jtkƨsq3D3[6-X0 :3Q}[C{G{&PC[f]pX nr>\( IE(DL(2LYI}1.~<\f#Lu9¢.*nl,!y t)JęQ<+{a?ب ^vG -J&TYգ:fqRga5'|.k|m}5aþl/+х|3gȗA:0cT ތZb~//_8mr[7׷r<-B!(MNBe$2!k(cxLRO_B& Ti&mۻ}IW¶EM"FP$N־l&&фj('ͨRhGGl־c`=+6yٺ'F dv!k(xJ!F8(dcklN>LM^tgH#O.4z}xGwY멢o J ;2g(W2W٨V>4~{ dMFDQ.J0;ij' ⪝Zu TV \bW6=UBsp 2 P}Tjb_en9)b#h`MxiwJ3qV򍠢UBGtn5 ![ ?֔ ,qϓG{f1X:LPg(WW>8*9>80Ti@\\\hJhL@6\#+VJhI+`.9(gpr3+ŠV«q*_)"d]JoW>v*+ppƕSۆAb;jG˕ V<+c GwB  P-)T)38D\YV9+˩+B^<WRg+ްXqN`+^8Nک[슷:R+tW(Wpj;Peeqj8* J\URWR-qu\ "W [ @0\Zyq*U.: $1R0p jg+PicWRxjReܡfkFVj%}޺"4+ 3cp @-ԮV*'WĕV5|,@a8(WPWpj;@eWCĕeL62`3Bڙ Rw\J72<\M/FI/o]|Fɽ3Nоv*+Iyz+WЦHc \\f]Bx\ W*cL.5hU˵3M)yEEa|whʆΤo#ƴ+WR_IT&Q/i"΄ Xať+*"]qQ-}wQeNbz,.6" XhwprvWVrw\J<+R.`}!g0@P CĕLR@e\IpT޺LjJ qW X_@;NNu* !JKzPPwRc]\ @B>v5H\)p)vt'Ԏr3+PhATɭqe:+3BƸ+P{ U2J6lz9" r 6"J$>Nv*EBաMO? 
;+k+ Urq5@\1KB;+++BWx\ WhPpqW(8c]ZExq*,J \ U!]TJv* }_`%\Z{gTj<+:+,; ʵ\ZTq*+'1<+m`. KE= G ԪS_v8I@dz{;`'`L@'K o?er:}?:q%u;d͗ z7y%g]RhO4J\T ?N\S?_`38s*V.GuvCTi0: _V6ߗZN o`ܠ2H{'*pЅRnC܀LS|$ G]pF t³Vǫtk-ePڿWyH7oWt4*# u /:Q0(p`UZyjrmt泍Ct{ ;qO;FInEP[%: h@w~+J#ۧBKkܡ`6b8FRFj@%> fMFX݃C5!~\k/Z.N%T \)CZ{`Üʵ\ZCYq*v,bTΦ;p)=XD4~zL+#rţD c0d[mWJjKk߉V\9xj*+ *+PN(\X|$Fx`x6]zEu?ӱ7ͫfcYWF>I#?`nF;ijȷ-{v7h7&x9`P~ O4ЦC6Z9CZEqPg\lKZ!bJ=@[tW(ء qW>"*% ${X8+L9wW(WrWpj ;@%YGJeKΠ"kWBҙVTkLq*)Cĕj@08(PWpj9=P%3WĕRPu̝P;BZz\ W8m#57U$@{縭v⌆"WJTEɎ)rXd( Ňp'lSc'6+DkV~eQF_r* yAERBh vCWP:Ӯx ]zj0b5+LM6tp-^B¬:]J!]lFt-jֆaՅ-#U+Diц!]0S#9+ku+D+*OW2mCWχI 1Ӻu*OWFzte:uҮmy 5Im|W YCWϐl׈BBԅm~G̅5tJ[;?%)G A3Qȑ]pPZK>hRǙO~+S@ RIncHyGlfuZo&jCW0BWZ4[CWOBWm;N+ߞ\nՅTeᆮ]qNS' S>++j| h ZycQ ]=4 Ft^& ]!ZfV ]=Cr]/BN}&M e7%oVtʉ]!`> ׅUy QrgHW0>=6tpڸ-3*OW!]Nv,5YB]!J7t JBY#eֆ0Е ʯ J#+0IrEy}鈺:]9&FjL=kuEFf֎_)`1bG;2AI 2->]i*m׈(e+H] v Q 3+<ケʝqQU>r`exmR/V1*y_b-ksANvDu9FPr{APܫ{M/WPB̪ ]_.3bb!Ԫ(+Gkcb?}1uo-kn@F,d2S({GG˞{GӯoA:> |>c1qoxN#a7/co_=CcM.]ò;-0&؞/t oxhgu㋞iܵGq|\j5rkp(7gq{󁒘8J'Gc)FyE;L``h}0ǨCn]K?Iڵaj Z`oa+S*ytz . 
y xw# zi(|HEU|}{{~v}woM^^ów?x{wxşO~$0uQ^~yݴa/ ַiw-M4ZY4B1^ӤwkLDԎ_ lw PS% n2臃 {hV{Uhj=J"U~@ehZ)\FAX\_RW)C?E$k̏T1Y&$rY| @&@Sm %Rgid4{ie@ס+q2pT6 h(LrAǡD r' ѫ=<>RCt5c`m7p^頃騿FF˜N}l>%ۿnW>AL ~͕9kJ2 Kx{q9?:UON;;w:lW42U'.<ޓ*& /xQ\@uP%Im:>6p0ʸ[f Sta3 g& gIu#fx2KZVQXvQ%Ay$`ȶS`b 濢?|_]75P_x~qsY0sxQғ;w2>"K@"XH()'Sg٨K;c;H(ϴܘNfYS?+3Ug,зoVɫY oqg6\)pjz z A϶؜wd7jh6!Qe_2;pE\',mbXC^́%54M!5$ 0E0Jvt>i7HϳGSIN5B1 br7:=PU2umdF2zt2b3=t8~aG`yS6d&aLY7qO!ؚP$*gPYjYS-tOv#%O['-2mGeP+/^=:c ~l $w0Kmsl7JC}` $ g3L2%8%;/[S_җ424TPΨF]q&JA-!qLR`|PcP ] <1 tVcQ[GcNѻ/GKW ՙwyMw|N\W+_613,Q|7Y,a`}\,['Pv[gL=< m-}ρ)AmKTgR{є`6[қV ?52`Xs_qsG^<|K+ T/@m(sٵhT->7|N^i2}_V(nҸRs ]۪RY6&U^[='-<3ʂkbfپ k9>ꍉ0p#HStD:ʥB7+kĖkbAkbkJʵi&%=>lG?7ڣ ewJ|VbW-v3 iojH,-׌vHd}HlrT9=y%]p^`^}&m&-G)Ma =Kʶ!˙\q z}g*y{cBX- 3̷dalWM8p #:$+@cYi4v1_X Lql$;Rɒt$mITխ{=Ez;1s~Y32c>//>x?.xCn[[z@\_ݏO$W<#W 鼭e{3..7"xh?,{I?9$ty(Q݄,硛4]-7u{aoӯEU{ e9̛G>N˨Hܙ|J1AO{~d0v۰-aƝ=&,§xu~zvw8>e<%)1\bQ֧ ȫ_8NI8'V|0M}WJ1>ͳx@4l#;Urzl!Lor n_޶/x[-!C/>,]7il3\ãO=],o؄|^?`XNvZ?:],vM\g_3džOlRltO8hlu<Ѱy|`hۄ:LbuVÇ'ڔC|rJ;źуCמu7Ţ;ON:~y7cV0lwgYکyj^6^б?DD~6hi )$ɶ żfib5)n]C}ɥ h؉0"Hh;E![_>!kv28N.*cgH_UR`#df!}6 o`ǐK;j'BIOF<,//gW!y{" J2F JI ub@OOrKkz\@dtٲq8r+UhR6;_Frfg)'"H"s7>u&vлޟvcG݆:x~pt9nvXWF,*r{xlLUϴ/Xvɏ/~;?_:K3ڏ3'7K<\%??Z~Rk7VnC|9>HYPIjUXV97D[QşF7V o4FSAɋ(ⳏ؟m eiS*ٖ'N`WgZ}UA>[ɯ=)7'tӳz~Xc+?~l=zzYƵ'}X\^oL_lFo/9dTF =fe)yȑ˃)dj'yrmn. 
w$EY;mQxe!}E&.:Q { cPwC;lJy`W &f&&<2Mx=y5oSVlZv,G=Vj:f]L;F4m̦Y7\yXa-~?զNtLnu7{қM>rGG6ds䝖a~yyxOO}~,-{*ܖݯz㮔Of<&47iz'Mcl}SvYՁqBW u) zt;(r.q8YQ6:G'E~R֩>s\S9EA)E cb2^Ec[*4FY#3%1"|4'~>oݎnKݏ3_C k?7~'.څަAيTDJRr_lm6_lmHf6lmfz P-`>"Eju?q@Hv{@3)D&w6*Re"2K'v3r7by9ߥ֐75ҶyT,kv\ P6'ήqfGʓsn}'vF;'Rzvh[,zl{w`⋳ p,G(nzjcOU}* g#@QF^HVvL â׵՗{ ۬G:0l6v1 G^lrA8'HeB)xѐeNLjTE&(/6P;g,a_Jy' g}]o>ka9݈E߄&(_f1.Ft(|^s*Yq=ل*W[EgH2ÃPqNyy/Oڲ,cDlIj3޷%B)1l?㘅=&VIQݻᇉYD$:jQ䝗>Q e."T(${!=z .a)Yĩ|Z>Z(Wa8(?„#1"'{̗:+y55(PV{pGIF B|(w=yD^^?bBsZ6ȭM) :\pr%JcUx#&#L8uj@|o{dr cs+ٞ8[[:dmVW%dVL*Ujk?MK8EQG[?սFdjJ5ka Q zgɤ-6JF+%%^(ftB 4>mVVXGp@"l AB|*h*A=(sSL+!PWJ Ń"6&GQueE$P.u32D9TO`oO FNW(%'q(7b6?s]Mܐn.',Г0āՙ_5 IO4%(!Y9BŢ>Qli+$R$ >2JGw0yӽ]D}yj  p8KtlRFJaB ﴇ43bXANʾGT8ɬIH$S|PNڇ!^E5}+"dZuH: (6 3\5JbBr(ZnM3Fjj(WIEd*,3غ7E{RJ2 c%YG])m U] TAX qմ CV"RwN97. ۬ZMoVjP_x=dem*ha*xL~~Ad<# \HL5Ty:fݽ"zEOLoЫbjb2`\\^.2Wݵ2w!z{t;1g%AtS,ٻ8cW#{胬8v;l h%93 \Dv^==] ă Yi2D)a9$UMyuA{ v~ IFs;5?r/}m0Lo84<ؗjz7ّX_fѠx\h>^ j97{WOF+Đ#ba8_0D!efYA(3R`i@Ifd3RKe1.--a&?<(\9•'=d؟`^vfu\jSFrW%t!yi+IP*'$Pby Nl>ye[LoSaGm[{ISfԧnWz[t'nnaQ8:2IjMQ'Q ƽĄeO~BL5 YmI][vqB؃C1@""'+cԖS:]&zdA+%Ke=A;XYW b#8n?vΧىv^N)Snww|(XWV]2a6Z2ryseb*Hh 6RStj$@s9 Z|m4QX#9H0R0,!~sC6H҅B| Tk QVdiKHHܹ``>uw/43dSqVMNv^~wnEnmE.\z TJEfp^?>Xu/{/ >}m~~> [?  
U϶>r\v{z`՛*m ^gdɾԟg&l @jmnF!,4XX Kcai,,4XX Kcai,,4XX Kcai,,4XX Kcai,,4XX Kcai,,4XX Kc=X"YhSJq*4Zm: >Ds fy8<`+&c$s]v'ʂ7C<;+\E6|Gl;>*_8 Qܜew"HGDqv78'2}qdit'-;t福ԆYn~oL[Rd73sb=G82 pd#G82 pd#G82 pd#G82 pd#G82 pd#G82 pd#ϛ.4ĒS*/?_=x9_%: So+8lx+lTO +)n"d*yg il_zݙhHr3-GvO݉G Eqwlg]x6'{^۪God`4}j>v>Yc%+7R R#EgQRK/d/2/)S/)]=>kkp5& j8r[⋊o~C, d!L 4WГ9]ˀA+u2`AlWC3@Q @Q @Q @Q @Q @Q @Q @Q @Q @Q @Q @Q @Q @Q @Q @Q @Q @Q @Q @Q @Q @Q @Q xޢ0q}J|(h:!>4WCi^PJP>'s 0VX nt4b@GI2FEE09IK:=[F4y Eˡ!:BWB-!ա"O7ˆ.7EO[tK7CёNDG*EB'L}d#SG>2L}d#SG>2L}d#SG>2L}d#SG>2L}d#SG>2L}d#SG>2S@ ksub\?Ћ Ӛ}Ы7%, VtZ$O^X  >Kad\Xr>g_ (^gZ˫+ցրpyi IƘPQQK8DĄZ5p?1@G\3y ?ɗ'GrAo ^u F6i>tC>Ntap ;MBp#ffH4i C&_ƣ~,nmcѬr[.r;˱|7' )tf]$j_<ֵuZm6BUso]hegi{'YӠ_o#R1YY 9J_*yESǓOV:a]`N8Wda>Q1Td,2kJ$qW:gU ) 19 ;CKB]Xص1ƺ jc-hihihiii'ma`{չUDUmY|ͯv7K;>i9zR~c`S ܘ<`~Ca IJYS! eapcgp'<@ŶZrlZ7Zr:Ofz2SMI~-\?l~1YAs Ag;yFYj pTd)&cKWQd-e~7sslcӋ|XDyj/VS{y6dYo[{򇷃LYcoϫc 5.4Z2/5aV?s ԋ,iYo"'X|c>ifU^]b98*ym'65񾷳Lփ{gaU^'|qm.<lw!l^?T{:yAWߊqdC =O wgVx?ՇZܯW2mT@#tKUl6{\T tjIf8o.;-էYm!\:f>Xv3zK!si?i?XA2i>ٽUDdt듄l~ecXcx]+1PQ_Ѓ46|3 /m}*#7?umOm}WӼ16R&>(&}I{zr.n1k}> ?߽?߽Ç7߽o~x=0ye[9 ?y@=?ѦGFhI|E\+z`6\* f~YSޒn[oRRg>*10o o۫Zq+fucܖjs8tP%ҸUR ?|uӌ<澩vmy oĉ?swم7Tuc_ډavzh4Zϔ0h~Asw*I!3—]\{*<ԻęRd9^rbe ̏~ -5n )E-'"Zc7* s2. 
ԗL3b͒CLl>~&uUѶG\nOɪ}rJcX!ꔽ5;y֚e9ٹ!xfsynsc5Z󫏳rRP6e!&Z2ח0Q2 5dsGq ݥEޚ(JW])ZuQN$6Ofɠ\7E\/dA\-m7<}^۾:cs3vwFmIkqirۛVn7$8P,y8jatHqO|l&SB 9.,&AxuU-Eܲ;5^l Cǖ YEL1a,v:6.u|Re*)}!!,fKD1$*n?Fpg}.PKoec ?SacwJCuN廿3i IOWg÷36^s*HY96S YgVTABìlj=|BƓ^ڎar8CP3+zU GU)8(󭅐%g\2٧zw5X_GSSuU'pR{zFF;-+ҾV"pgZ[M10˃y_wh;EI(u1S_(8,|Hlh}d`169+ +p~JCԷUTJN#e6U͈1YIeIk*4ՁcȈ3uuM aDZ,AΗV ZNQ9uR(KH([NbuU "z:= 1j{ywwLIMڡZle~^T?vOBx[W_)_z)?< ?U|PQ?{6r?%Wc!$I[,9#[#J&%F%͈zТ$.vUǟ&㸗#ZAk|;~Vʧ>;S֥ej,?G-w9ٷ`0[ihWWrt '2\֞?V.[tpu9pU-ljO0խYjGkp +| ڴ.`u: Z`*[ RE•b\~Ag s?O]w?_լsՇ ɻrRު7f~wtF TB*$WKeauEh$cQJx;xF[ac)[%T$6H$I+'fU.-Q&49ǕoktXS=" !(+4N'.ދy-z>EE=h.!ڨ":=$N ,1o,EF&1N=G|BG`@\I$pPUrAZ:hVXa74sAS.C0Q;ͭ:A2{hVK4 *Rgd*>=z8L~𮿉c>"|0hأrZ .&Б*xF3n: 5, 趋^,U#-~5 u[FEvAyj {f΂\ o B U̚r2!,s>XBҮӾSzs;$v]J]ٕ)ݰXRkDu~-U+U祉דEx2>Zp4wIaqahBY0IPDBV1:I%c@ 1Ó~^ZDK@dCPB$5NZE(Yh)D |88VBX;"'Yhǣ.VzF?cKKjOo7\Oc[3Atk99˙M$(U*"8 =-ա hɺBeҾ K@aAV1a rhwHGq )2q\D4Rb%DLk@r<ݨ_ǃ_8 6jwۙ_z4w}~~.>sq2E6vkΦ-zt7-^uFn4m2cY򵸳8B˪z>]_u|uQ]dMlsYo՝.Sٝ7h~ ?y>ز[,G8x; zCQEל7~ţ7(/UO>b*~j^ :/(^h5sHv@Q)ķ?2E]D D䳡syQ~Hh>\H;ӐR+_aAWR;#S5)ԫZi \H؉OE M4$ƈTIImr&rcddN2Vxp&ߴq$pifB`13g{aQݺiGř LOs׮ɟBmf(Ӄ4Q_,{8O%AiR^(EG8h6Cppm]`|s55 ; t˗V4 8P\hp$hN 4I3]pRAT9ed Q4 V9CJ& 8&D"I ALdR$6 V#XQڥtP4Gכ;>!^Xǀrq7S u^$e2H$ΥgA]@04 #Cfa aC \bLWĀp!YNXe)T%w8k,aMM~:8n[9:Xy!au{h$ $Lk E)@FsHR6+, e q4AH68kyAVEQz۳jd]OYy{W~g.na泥YvʳNK'UtJpD* #FNIO|$"4WB%"2P g;HknmGX͆=/n5^ qacSD$IH 8\rR˵s&"bܰoSH4!cDc}~l2佟d~:^Rl߮fVSJ[)v  %扩9% -ݥ0ûv۶~jiY}Zb\}QE;frVPy,2/r䊣7%S8ыܪ_؉n[O47'TNQ~%O)ii  Rh1i!H /a(k N hHG ;bNg '-䩙J;Ƴ-l7؟g'ֻVjT~yH>4uFIpwOaTڭtuu待rkR9⋱>?^^չ3p$,cS\, u@OKR+&I8 TXP6W]mWڦɻqaί}[ztR|nkx8VeT,U>2ַMQKFPA%X-h$ IPD*i!1js!'jD i jb,nD6rT8ڞ~UU (ΚpR C8.TD-4 JH a)z:"\v^G\Җׅ4obtTemQHB!gI#6:$EU!R5.-Q&hjOlvƊ_+wY9JwzJ+tk0ON5xvD#|Ro pmZ-{(Ms*]Z Qj" K %4*D1cǵX]TZiA\U RJ'$it.Dp\Ǽv1" QJ`4}B[ѵz&z?+35Ӫ;tO'~E^4_L1@d">:ýYT97 Ikʈ@OLJ&0`%pe )#cY+gRiqu^l$Z۔wBjů׍Vs1֍ohbup 6E0$w#q* *K @a#Mnu#iia ֚܏i-!j2bK~,Y*譽kiC^n_ndݥԤ[*>v`eZ5-h-gӳOD?.8J!Mu 3Pچjc  jҚj%Fˢ>P%-%=w5Ej>+ًV抱u!<-UU18m9nWRu9a=Zχ[EW=wje˟kq}*Wcp]}+ <|'w_ɢM_/&/L̋ Q!j'3Koe%ɐU``?H 
ؔu&xka]Q~CS/ݻwgKԚ}xQ WWi*h :[Un{K\-6top_~~=k^uy'Ug\fvr1jvi̭A-7O]ZN2荺'iU$t[VNec;q8^Di4Y' ȓ Ѐn_[R-n͍֒澢Wg#o;2@ףp 9uӢM9vķL2\ ,>07_N wU*%B !G݆𾵪*xa(AWVXև(/4sf@÷8HZp6?vM_qGs2w)!ƫd}nzNc%S3,syOGNS3y+([,6hNf>bkZ* DJQljf)ozZ}7*c QkZBTM(1 b]r"^@GfOu+fa0V3Xn޾cV()(mҤe#edk!)B콓K|(Ϥbq.`!gIH\II2SxZstq;u[_O@ز%amO~w荻f4cV1y1W)udX{s F0k|r˓%lM*=RigejT\H<5吴D|0E2)Mߒ$@F;k %<]|[jʸ|V*8nqNSDШ%1/ NqTq%Y6A3J";JaicbC"ߘ9< PAR2!D&W 0kcδVyyPc'9kM I& ϤpF9ԜTiP9ۉG^!s:[_H7M EB\9TPJo A'hVȨruݼ`1މT^[閝'44Ka]19*Vl'Lw>/dvu0\g["{ώ5㫸0o jnGLIkhBrETcPkQ/dzm/&t~S0K>h2xTA9HUC]!LcSyDAT'Z^Ù`}ɬ;sBc>__m=Ǜ{Nj2ܒeC]ubLl=Z /+-.C6jfN~Xr`$7Ա1E$ODk+:mXVBibp~\kFe[?c1rvz. dlhSJ}AD%&0 wm\Z(я#ɉw#A_*(<'*լ#UaWP! 3ze e/ƓP|OaбWxվ~jOOnMJC[?+DOj0d i!ӛI|Zs9oZe{Z+͝JqaݶLQ-wl[;6kRz}-W|=F(<4}<}*<HIݩiumc8Jdt`t4W6_jqt0n& fZIq}g(毿L^ qwy}Cdtk?gt']3$ ?-(zPw{-vWy"y71yxr6h1w7l/S*s#Jʾve_ݓ[ib3?N(5+ B'F .;`½lÁjUףK~)fk8j3{n6s6 ~17y`6]vڛ7}*]NyU1}Bs[r뎗)bB4Wn6Xci;] \Z"xbE3`i5l_ɺM]u1|~G`j l?}VI'f*i_vauFsENTVn1UjJV;].k^Jp\pо4>di`%ZG4uuNm9:z(Ɛ,!߃RZ+1Hlj'.EVD# f%~h1Ignf.Zar*%\lbОqA^rs4l[CMn; 5% Ը$$u8'M:J 8B5P7aJEAL[*PP)XbV4z閤9 #@n[G*OVݾ^kNXytŤE; a^Fhj';?rNMsu<}tjyXmXb&$pBJ IҜC0a9s ۦ<|R`8@s-"iEzM,yB by.VWC[ح(xzlcOԻݮay\?tv?&7|8/Vl:gzylymU'!FF9cZeJ0V"UGg(YFR g!EQRU|Y(X(2v~ˣRfY ,U$0$: #%¸ 887)-}o!ܗ_R)J9T 6#ƄFdJI4CPqrgti;Z34H#_zJř aDj 2|(֕'TK)`S/r[u!gMV{{]ӄ uj TPME߁B+SxJtjҮF[K,O54J9r~+SEH>s1"ԔCҞ3iBR URˤ4K. G'\\v5Rm&r+> Zosa9 L9ADǼ.:ygJQ*d(( cD$@ KɄ4\%0U|)֜KiPPOs֚AbL)Iql'Hs9> -ZfPsB΁!tn@r,0(O Q9*D^D*jpҭtMOzcrTNb}tw<_ $E.afٖ35!Ȭ/Yv_Ņ);>[ᨑ)im3RXPκ(jA]r 8ꅐ AV5K78mU۱vq{p5 {E)%R4xbg< Ax$*B1Qϩpqԋ__m=Ǜ{ ;R/*7*=keto#|ebȵʻ{\[:\m$I,ocYc@+HVup'`lTgq {K]}W+4alks&c:卮'DT"Jk3`'ȥ۽x=xw>s2{Wo%UaWsV|2Cs2I(?{WFJAvއcg1`C{R-L6lZ"DՂSͥwBʯ;= f5[5[2~<4x / v?5$ ʮ:C̳X=ޏM9Z?\*S! 
Q ]3UgfcFX4O752?hS_NB!`NaNzuj,.Z 88~ WLJϘ rG:_}FÒEԔ(0 XM/7)7s/+^@Xk$\MدE9*IƯnw/k$^0ϚvѲ%CޞYz8Pl<DSǒU{P;YnkҴy7piFҶ%0.t[ypMn6SUrGSX'I4~ ԖA0P00X܄V},~^ТV>X;MԹ\x=-:-x&hcRۗcKePfmOd,=qh7hU)KunD vҳ}W;r37o7+PXԤp<-qR`o'ǎz--w:pXjWԤhGǘCQ[isZݎ||Sy;eAjX<~Z҆%;S9Z%;}οxA;cg^Vhnre BE7܌`逽g~(}iX"rZaLSF#E:4Âi4ehy!͟za %Z-4:+fY;kk)79K&QNVcmԱ^AT 4h>W=ZOz0owӼ&8'%FA`@jL(:ORd$`K :>Y]'/֔K 2‹Ijp/QhRJhb行1$%@8V[{"s Y]h./{3s6n~30JfԡuCs:0-;Vӿ%[( kę'X4NK Cրڈ-d?"FYe6ґdflBXgL #UG ]%}MǏw.z:9v|&L*w֎p[Xufrت-Uoun0ҭ+شo=qM|֓_קL6<siԔ.菺dh>:ng5?.=y0Pvէ<`nYអ5nJnxd^oT7_ϊ1dWvחu>' qV_o !^C`7Ƿyt X&o# EyS.:.{2,rAhё ڲs Wܙ5: VBKl%6*~Y4S_3v)4s϶t< ﷦:L/Nj#ܗqxd#%E&iXd`{IlQk":FR'T+o%&80iu\!0-֜7rQ8Ѩ/Bд45`hݶj^xtiF,no ;g:j氾LWٱ]wp;*K }Jy))zNZV ꔈkÙ.n/?J΃qGha'J Vp8j wNS4%)-XE}t .'c(Xegu@ Vza17)Wb b$96&*±0XkΎ`USVT 99ra3ϭ0 v7 u'(2HHI#hA:vI#%aǐ3bHK Cb(r$Dzl T9J.$ X6s ,ʄb*zSAw 8nWE@LN(%l9 \SV eSHK#ip[{b폼 w<߲pIV vz_p;{v:ΔoBlEE~z:֗JN8*ΈYUS敍$-y0>iJF``Vnv?(]=vYܲ[}B- qw8I{*b\rRY0u&$xrΙ fAz(sɟ}Ql0FcI~IPEa!b]ns.5͎3'xO0㒨Op{?wgu?ߎCP \?]5nȻp7;}J?K+*&?dzXs9~׃Z >nkg1Jk)u2!вEVGѨRsN];^]7NWoZ\PJ \5(n?ڢ~eU |Xxc@AOFO7մj*n| F&jvo3h`5߫p*(NRjAmΛz7}W-yz@+~r' HOJf4(tq(J2 8$\s`=a}^xbwHI &C0QhI)KIkM-Uag&.;*U$ɀ?ssv8/'pxj]bH/hn53 s`Mmvic\og~H>:kp9l YDSC>kBVjGcg=~{_Nx &N`ii WQI0( ٗFr럆^Nt8y{7<|>nʚڭͱ1cSf!)ԍrSb)&K΄Jqa5' ) 轔L%%Xm.dr Kq1`R%di-D(ˏj~Ů=zhmcFz#P*Mq#0&MTn}(p R$Dr[N$cz}lۼ0 %:qh s8c>p648mC-N)KԪP5&-V&Sԉhtܜ-!Nʫq 6lLPv,BMOA˜/ӷC Ց:;Ӹ5<<;NZ-Fp:10=޶-?|I)@@3<%Vr`vUr> ;>%IkYr3~g>?%r\oG!>-lTڂIvyZKuXXDPDŵ+jluܩ2J].M'>a rZcjb$:"B\Ǽw6< QJMAI$D%^vyrߦyg#~cssljo4~S^(Ș,VGg"ysN㍴9wҚPENFÒ&+Jb{8kW$aqA(2R9vF T2eqڦļ`W'avszeɘ4 &a.8lGa$N' F* (Ks) 2ֱ?KZ/iS^A(1:r_ųAl &#ʼ&ww[|2 7,|Q Fy_իVgJ&NQ=]2%i1ENH єCfVclCgYZcHz, q.%4˽8˻ptJLeD,{F#'}rD 9@䌇̞ʶW]5z @AhftaE݆d ]Nry>{^0x.S!; nG#̏V_>D{b&XPA^Ax^Fbb3k1Qd-18mLVŐ a0UI; FEBK:2 buN61bu.8ELΓivgy۱j@A ׻σ"X7ytl=,wKGW-r0{'],F6POLxRX1baM‹RIt * /\@~CJsEU*$;1DŽK!y"t\ k-v>,NDFryXv[1"YKHMƂVt|;H8o1iَN y#3)x`lĖpN|ʟ~o|vLRjكEap|M[]S]K[|~Wn/=.?}~;Pw.{Gn+i:0GHlZ7~U*%c !vmMgU<ͪɗmyк[m'+h-RCV>|Ђݹ汩O˃8q . 
!4w_TxÌW"~LfG?F'ߒn4SNINv`+~:A>0A o i%v1w6S[\UߝϿ8"\JqL\2V.MJ0ȃ7OeLuCdkpsW^kWm#X5յy[=WLħm']5& YI[c\g{Z|»[7ىX6]цe6e`1)#=w*ԯ1 v7vstSmXqt>mŶvOM6cV:y À\kD#*QTN+b.IR}:lJi+|Jbɐ4,Jr6't* !$% ޔܷj4 dDEigLb4t >f[^[vu^bESRި-j}j~/B|i1K7 ]&p*cotTlu֨.tzu^i[9Ќ\htl3CIzO[OVW߲ Noh˶F ev*C bݖV9~]^dMz7c({^֝ye:@ek:̊k1\u^vnJ;gS\u~u6!tHQ ڂ{Ԑ.zWޅtqGay1*J' \ ($2b"ǒ7 B 4LYM e9NYl*,|9wӨ؋.0ON>ʚ{}αiѴRitCs|}UT,r OL*)Ԍ *|,{)ATki-u+ju:MrS`Ǐh 69x F`!"ٝ$E\wu:[c}ZiYx^0(5D_Ed̾BULvU4Y:TTkc%5' i̙qh 5K% 5 ٛE% Y8$cںl[9AFdehyhDR"9+KyE2.iZ&шNњ8؄Dٕ'0(F !f㪨-?/N h!OOoɿ^V/il@U.N:786Ƹ=H&ZSJZ ܩ՝Z})bfzx}*&ICwzQTfr/M^|^b(O놾tFOYّtf獘N{LA@B Yy 1'F-SB/ M`k7ӝí|]JޞHA}$vG׽6ڑ!̥5D2UyMWxTwȓFbZX8AU| I5,$o!HZ9 VB䝍W,ImDb{v8_ iQ T* 3V*-\T9M1.gp8o]tk[c_IXE3nGFU,gVVג/!ZgmhҺ֭\i;aQ,j\'鸆 F{2毒nRfd.iG](P ZC st-J\H筬 BbEv $j&ߣJ_~֥3OP}Ռ9TiD,$72J RJ$r.z!&ms\VCC)%'#RZ&t6 gh1L!'XȖC[gi5xi۷-T(Aɸ'v7nRF`Wk&+bo?wjJu\ßoՍoWͅCa&BObwyU-ճ[Av\152i^;荶lm-}ik6Chmf~|?`b 5mk^*A[d[ʤ:Ja#a.Qu0tjF iŠ޹\@U5 SW׷ ׻?|1{=0iM =Yߞ&PNجjť禳b@ /߾z#b!'Y-+3*>l^d\}'ؠ_jިTʄ(wg̙yGk eE3JX0>|Ui8 ~o]-?̙H1 w Bʊaft`pccws #8?N1I~28iPlD9If`OS( & ag>d++kG)1>GĈ$ xApCSN[ńUZ,!хCd Ѫp$FۮIq$GG\1/ R`Sۼ*Y!.([BЅ>;_yw9:?AJK&$*('.t@-#t'JwYN7g79L EQIfo4!2nw䜲s*%g8H(n1y[ F99e]imݡ$1u6ymH3Y.SkZdn!-/a INnL,$Z^tj/d?`,vr&c)oM?\Z+U?eXZ]JLK/bD1 !;`y)j1ձfWJL%Y,F1Ɣhaύs%[PW>s*H㱾CJZ,XV[fZsj| M~ޑ*+OTv~Ljv5Iʍ?LKt +TQ'mi/SYM]:T ;VSaeǒLtGY^Q1q\bnj<"53B H{oD[dbNK>zdӌhf ۛѧ{.O*_ni |&ڳNI(7Ժ :HZ)lp0/VJķ tݿwZ:\Bs?0W6dT8_Stql=O՝<&dQpoF ?ى1Fz[|/UOy͝_o/{I5[T?m[hx>w;3;Fq]ȮK-7PTDK|g FQ91`kp&y:(W"þ0E%ƕDS:-cfjP%G:Y_A1(,gL!o" `a@Յ8Dl)fFL:ZbSr hlsJ|߇?$X"f_.!3ECz;q5D_n4Qڄp2@.UHJ()EwNrh_ϖTXpy},/%Ԍh\~tr_[6Z|L[\@4Iӹ|l,V(`#C\3VYqڒp> 5]H vj#Ce֪MYCԲ4)K9M`6 /^g-N^{1D: S(2٫eȤf.X/b~,~^\'wz6oj]vsݳܰw3|4{D^}gGfΖuzqK| ϜgH_n8]mWcpk M%W$k(14 Z*N/ߝfN1;|Du-zů)ŁbyWX^ZWi'MzԔ6f'2l%n =ndQ,i$SYsH5I8|EjʔУIcF0 ,B3}oqNG u.ߛNͰwר9)g\W^AM{znCePfM)hdT5H㵹_d:nx &$Dtҡ]Ws94.O. 
ȵ0"q1LT&p"Z{+!0Hڏi* #m6E>H"?˯!n*\>tn5DP/޺΁"XKo*uFRhg<ÞFc;:y$^M]ePky}{CŠGL \i/ nGm cc@*&{F%,PL#(FyCtƁ0w o62*fWo&c%fn?ō:8l U73UnUoҍѡKB>i+o0Ǜ>`I i*]֥3FoڛbW7|Aͳ9'5ϕ\?&w7G(7y-қPqC7 1^o:4'?)mmC/_NQǜ=#gX%WelyuΔ|) )[ӑmׇqcbŞ8jQl,p+2pF Q•f6ƞ_*$a]@ 9ן~xtϝ;#|l) A+V Yj%"ݪYvA*ͻU?qBҜ!ͲTv i.:eY i|7݃{,$;Q4h^ow ^ԕmZ40PE6*j{ 56K.\͛Fcqf**;0âg|'|zȾ̸DZ6\ϴ҈xRcɤ XJbaRL+Wѱ4Rq<+;Vl[6OQERH!23s$d̢()uCњSbCSխN?{WH\15?,rE"b傟xd_I|)$[ՒƦ,n`c)6:FYJS)zn2blج9;n<%MQ (PWvL4ʠɒd𬊢 Q",b| Ep Eڂ͗>` -v[ ~QwT-5#B#K'KQ`U10r\|6tM~bcogwȹ !\9!E"Y{. \0h,<[[9V{#޷7@ ;T5~v&y߅F;S=D!H, z NmtC ik{<;SjۙjBϙ'  %yu Qh%.j4bߵB=a=<ݍ/%e|V]e^Gi< c #K $S^E'Ϛ@'lok3dRIZ(&ܔhu Ys~_jXx9sDl-@F2XBb;I\MCCuX|Lb}@ͻ]xT2I&X2>% c4N`KBzu6^ @ε2ukUM]VǬ`O}k;݃{odaK[ayX_O vU-뫼H^JN0] "$:640aCb`|dϢ@K1OdO1mHT6p'T$KsQKa|mڠM]yjGCΥvF40/ӟ@fM-rqȔʜ,B.*q/g.!-ߔ{Q!:xX*ٓx6O?QHlR, &Wys<8>"A=<,.eI;/0˛ӫ[l^[T<⾌},oqoh|:y深v*6lst1nƗws[?g Iqthq楶O[*2;思XF&ab']uz?6h1oUc-UZk#Z}=}{n1ˋϬаXdq؟6Z8i6(5$<$R>`eF \RYK ^iTJƩ<nn 7) :jqzo"+0#p7~ ~|E_ 9, ,f*U2J5}[*['N!H[m2x W\Fb-VɒwE$x&K)EHņAbJϰ{:+mŷ Gw4ѮA|:Dh=-2_L0576QAG(i 4THמn]eg]Y'wo=gs0zxg>G7!*8W乸E-̈́>;h2Ətg[B 70>NquƳ׷ތ6gs/^[%9}ϫf!Kb)e`Č>H0S捩jvof>OpE(.w&#Rܖg ,lqON}(\CBNmcm^.͏,c3=!+wpK`,Xޚ)}Ko]rۥ[5rcfj؍t{L;ARg!4nxa|joԽOj~(Ϣ楒zC}H<(o-{`nxϾSOKO!\EG=\\8.zS{#wuw 0!2!U10:G>xkA~V[f3;'!DdQ[ gt!h.sd^gJ<T<GR \eL _f̅m,F֚S.6>8Ҭ9(u@6miog59t=˗7TYɦگMn&xBXnӫLnp^W -漚l2ULl$NJc܁aȼ)5ʦx?ʹH-'?!:DGn]$tQĜUELL6tBQC!s0!N2$%NCHW9B/ȧR%y.75 +GbW&|u7Kk}u- 57Q} ZɘHD32mpyuCL&h{L*h)z_ߑ:/eA5 >~Z_)uX:9yFP"p UD(4!X,}EL -SnǙUԎ[Sn5 q1_  y1> uR+roqO@b6Y%<ŔBHv8w( փN{ vARP+˖XGeN&KUVzYU.i[4" ݛ1z`Y͌ 62*>z! ްZxe1#=E6zulA=Sx3c'$2_Ɉ"?Zk]dAd L{y?Q7υ1}>yWbjh 3 OlyͭJ'AX$Iɣ-"x %rI}* w<B XJ& r#]n{L. 
tWY@*pYR=,Ȣ49;$ƺ׍"5gG9qm<ڭy[ӫnO[/˺W{0 4+$xa^{}Ls:rDlkQ5ySc;DnHҀ Xi1䒵5mAX-̵hU wy ]ԪH)kxה- M?I 'C48Z޸0+k2IU6pI٧W]J٦ㄅww2?{ƍ$O!M)M.0LؒO$w%Ylm)٠qc-Eg–(^X-.5妌J;z+bQ7ވ*P (k +g˵5Ѭ`fy;l;<EX?ɚtŭ7ռ^ysN>QN?GݖBڢD4{qU:J-nWFhbEFŵ(flJ?UJ߽#|[S''$4:"x!cݡ$O8 SEw16ʳq.LL<:i <)Zj?<cIx#Q -R(FA/p\9CgA;$cxp)$O+a->,ND94|+FqwE̊{K~emx<\\B^*70e,u?a$'pۊVBޜ|2TtW5 ڭJk>*JLv۪O%gao 6fr )֨$i ` 11qa{G Qv]u)4\hرC:ڦۍ/;(S`u2=CHʮhȟduJI-o,_t# ZG܌¥loNʫ ۏPN頍(Rm^s!e(5a3Y :N쒿FMS--<6ʝ# *j.\ϹaϏZo@]cI5)ة:ڱ*UkPU]{(=BqCNTnzLjW5zWfE繞ׂU7!/PPLzO ''u¯ɯ+;z=Is/'^_w%9&øB?wQ[<3}?*1x/!F(xyN%rUY4|3&J$*"p9<2xۅ:_ӦTsty,v|vc)D1yU߮5ߪ̈f/jDɰ? (^SQ:GS^v?}YLsRY/NŜ߽_=(0J\jruQU&cL-?xT.&ꋁ+*xQrp*S)yW_ \ e1GWHh*TIUu՗WR3%v%5PGWH.p,pL%lW_$\-NƜzU|ts6sePNP> 5۬;AN:;IRR(s :4wѢk+>Ύ->Z [CYm(7(X@=3f)'"2F,עI ޢ5Uw;W`V8Ac2ikmJ/.D ƝM8Lz`2I냢9 'ʂ6 P%-!cHqwDF1[sXH9q\ҁ g,IVJpac % e8]]勤q ğІ &gځIfL)J,/cwhV1cp^ťN(j'k%)a%TDm&)B` 3guu8i5-G~m/踾~wta 'z|Z/10 X[2zl4 ;hrօF{P@Q+~rb &5k޾6vDAeߎ鲶Q5!ES(PxdJ(! HY"gǠim6sG 土)oeAY%[urG[:R6f!,e9ݭwiyo|x{SC=/ܼfO7ž.{ݲm\_qzC֜MtL Sc_.yLZOjUStȦE3 7 h*Z7 T"pY^VKm)cKffx sJ ޔ)IY:KXY1Ysހ:Zڄ0Qyt^{Xb>P"#J'Z$xҹߜT x$Sh-\XN|H8I8-[Z#g%a eEIBlHWK"Ǜua-7)FA^\x\'Wh?{:dȌ,{ 0&0%1\B C N̜?P1%f38 {ҬTh YZiA@(|$>9BI/HbH4^FmU5oI&;3[e%u $՜Vj5 V*s(kYmoVݶw'm/͖v|Sr2P)svRI;;'FxOD)/eC%Uvr1!j&q?ldCD1䜘J2XFΎ`%(Aլfǟ_V`Az7(S:ϓF (.1BFU;iF>t2># ;ca\ )Hԉ*r#]H6(Gdu*p ,B[c ԚzY:b /[ۋV@´Nd[(Ѭ9 \@)+H2Y-"y0ao;;U+Z?n['ף٭˫*!~S.ni'A O}ڗZ=8L8P3+ g@IO|$<4~JE@Q =3<62YͳpXek/ı!=8=HD=T1r^œI6 x`X7멠HsΟǔ~/F@c*d8o/\IY5z'`]ns.•țgOz`BKA\'BҟڈNµ*uЬ&* %HH\]Sb| xaf[))25J]VN" %5LE ΂&.;T]W[]oGWew8upI6 %qCQZR>$#QR"1lٞi{zHC0.p,8a^9e\J2.+^ g4Up|3%%w"M\z"aNİcV<rGypg =[紥n ÜppE. [=3s`u9\!K04X53 j! nb g =+6g| v(4ce. @5דooFvǽ0 I+b]˲9KbA]sr/smvJ{ijX-%-6 nZ>c*PoVR >5DkO `R!C+#YQFOO@Hq&` ѹ($W7 SLEƲ)T YO5^YQ!)Dc ٨ gC6__?=f{cۛ'GQom< ׽6msv9}bNJovӬDG"I(gT<XP9X1MĉDSB 8ZF4lK,H%R\'#Z.@v(s "6sN[ e{<1*;u,b^oktfpX/sJxy.;y{ߺ (jLҧZ5 #0w{7i| ͪ;mvE-Cd#;TmӪ%yЂ"LLR#)/,19$ Q@/Rl@&A"RPIi 8VGʜK&r-&q@JgΆ )2yZp/^c-FDRv۝zC&Lc,.YǣB.Ga]̨Dy6!ix"i9j9?<ǒ2:GsъBk/~S:Q^!L*)!׎;.] 
192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 20 09:08:31 crc kubenswrapper[5115]: I0120 09:08:31.022548 5115 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:44822->192.168.126.11:17697: read: connection reset by peer" Jan 20 09:08:31 crc kubenswrapper[5115]: I0120 09:08:31.022763 5115 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 20 09:08:31 crc kubenswrapper[5115]: I0120 09:08:31.022787 5115 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 20 09:08:31 crc kubenswrapper[5115]: I0120 09:08:31.022501 5115 patch_prober.go:28] interesting pod/kube-apiserver-crc
container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:44806->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 20 09:08:31 crc kubenswrapper[5115]: I0120 09:08:31.023108 5115 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:44806->192.168.126.11:17697: read: connection reset by peer" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.025524 5115 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188c65428e5373a1\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188c65428e5373a1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:10.179842977 +0000 UTC m=+0.348621507,LastTimestamp:2026-01-20 09:08:10.31806176 +0000 UTC m=+0.486840290,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.030749 5115 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188c65428e52c433\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188c65428e52c433 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:10.179798067 +0000 UTC m=+0.348576587,LastTimestamp:2026-01-20 09:08:10.319732275 +0000 UTC m=+0.488510805,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.037888 5115 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188c65428e532987\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188c65428e532987 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:10.179824007 +0000 UTC m=+0.348602557,LastTimestamp:2026-01-20 09:08:10.319751605 +0000 UTC m=+0.488530135,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.043647 5115 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188c65428e5373a1\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188c65428e5373a1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc 
status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:10.179842977 +0000 UTC m=+0.348621507,LastTimestamp:2026-01-20 09:08:10.319764465 +0000 UTC m=+0.488542995,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.053617 5115 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188c65428e52c433\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188c65428e52c433 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:10.179798067 +0000 UTC m=+0.348576587,LastTimestamp:2026-01-20 09:08:10.320076586 +0000 UTC m=+0.488855116,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.065349 5115 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188c65428e532987\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188c65428e532987 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:10.179824007 +0000 UTC 
m=+0.348602557,LastTimestamp:2026-01-20 09:08:10.320113696 +0000 UTC m=+0.488892226,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.073553 5115 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188c65428e5373a1\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188c65428e5373a1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:10.179842977 +0000 UTC m=+0.348621507,LastTimestamp:2026-01-20 09:08:10.320126766 +0000 UTC m=+0.488905296,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.078814 5115 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188c65428e52c433\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188c65428e52c433 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:10.179798067 +0000 UTC m=+0.348576587,LastTimestamp:2026-01-20 09:08:10.321020199 +0000 UTC m=+0.489798729,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: I0120 09:08:31.079789 5115 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.080318 5115 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188c65428e532987\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188c65428e532987 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:10.179824007 +0000 UTC m=+0.348602557,LastTimestamp:2026-01-20 09:08:10.321055739 +0000 UTC m=+0.489834269,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.084678 5115 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188c65428e5373a1\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188c65428e5373a1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:10.179842977 +0000 UTC 
m=+0.348621507,LastTimestamp:2026-01-20 09:08:10.321068959 +0000 UTC m=+0.489847489,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.088518 5115 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188c65428e52c433\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188c65428e52c433 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:10.179798067 +0000 UTC m=+0.348576587,LastTimestamp:2026-01-20 09:08:10.32157449 +0000 UTC m=+0.490353020,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.090981 5115 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188c65428e532987\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188c65428e532987 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:10.179824007 +0000 UTC m=+0.348602557,LastTimestamp:2026-01-20 09:08:10.32159447 +0000 UTC m=+0.490373000,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.093266 5115 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188c65428e5373a1\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188c65428e5373a1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:10.179842977 +0000 UTC m=+0.348621507,LastTimestamp:2026-01-20 09:08:10.32160933 +0000 UTC m=+0.490387860,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.099501 5115 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188c65428e52c433\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188c65428e52c433 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:10.179798067 +0000 UTC m=+0.348576587,LastTimestamp:2026-01-20 09:08:10.322657463 +0000 UTC m=+0.491435993,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.101467 5115 
event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188c65428e532987\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188c65428e532987 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:10.179824007 +0000 UTC m=+0.348602557,LastTimestamp:2026-01-20 09:08:10.322671653 +0000 UTC m=+0.491450183,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.107649 5115 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188c65428e5373a1\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188c65428e5373a1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:10.179842977 +0000 UTC m=+0.348621507,LastTimestamp:2026-01-20 09:08:10.322681913 +0000 UTC m=+0.491460443,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.112062 5115 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188c65428e52c433\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in 
API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188c65428e52c433 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:10.179798067 +0000 UTC m=+0.348576587,LastTimestamp:2026-01-20 09:08:10.323060844 +0000 UTC m=+0.491839404,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.118062 5115 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188c65428e532987\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188c65428e532987 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:10.179824007 +0000 UTC m=+0.348602557,LastTimestamp:2026-01-20 09:08:10.323114794 +0000 UTC m=+0.491893364,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.133375 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188c6542ae0c0834 openshift-machine-config-operator 0 0001-01-01 
00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:10.712033332 +0000 UTC m=+0.880811862,LastTimestamp:2026-01-20 09:08:10.712033332 +0000 UTC m=+0.880811862,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.139998 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188c6542aed618ab openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:10.725275819 +0000 UTC m=+0.894054399,LastTimestamp:2026-01-20 09:08:10.725275819 +0000 UTC m=+0.894054399,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.144393 5115 event.go:359] "Server rejected event (will not retry!)" 
err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188c6542b0297a83 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:10.747517571 +0000 UTC m=+0.916296131,LastTimestamp:2026-01-20 09:08:10.747517571 +0000 UTC m=+0.916296131,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.148997 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188c6542b1a133d1 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:10.772141009 +0000 UTC 
m=+0.940919549,LastTimestamp:2026-01-20 09:08:10.772141009 +0000 UTC m=+0.940919549,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.153855 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188c6542b1f6b4d2 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:10.777744594 +0000 UTC m=+0.946523134,LastTimestamp:2026-01-20 09:08:10.777744594 +0000 UTC m=+0.946523134,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.158548 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188c6542d0bebc22 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Created,Message:Created container: wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:11.294170146 +0000 UTC m=+1.462948676,LastTimestamp:2026-01-20 09:08:11.294170146 +0000 UTC m=+1.462948676,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.163957 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188c6542d0ca95d0 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:11.294946768 +0000 UTC m=+1.463725298,LastTimestamp:2026-01-20 09:08:11.294946768 +0000 UTC m=+1.463725298,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.169101 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" 
event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188c6542d0ca827a openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:11.294941818 +0000 UTC m=+1.463720348,LastTimestamp:2026-01-20 09:08:11.294941818 +0000 UTC m=+1.463720348,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.173943 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188c6542d0ef81e8 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:11.297366504 +0000 UTC m=+1.466145034,LastTimestamp:2026-01-20 09:08:11.297366504 +0000 UTC m=+1.466145034,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.189222 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" 
event="&Event{ObjectMeta:{kube-apiserver-crc.188c6542d158d5cf openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:11.304269263 +0000 UTC m=+1.473047793,LastTimestamp:2026-01-20 09:08:11.304269263 +0000 UTC m=+1.473047793,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.193715 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188c6542d187e76a openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:11.307353962 +0000 UTC m=+1.476132492,LastTimestamp:2026-01-20 09:08:11.307353962 +0000 UTC m=+1.476132492,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.198544 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create 
resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188c6542d1a2cfd5 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:11.309117397 +0000 UTC m=+1.477895937,LastTimestamp:2026-01-20 09:08:11.309117397 +0000 UTC m=+1.477895937,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.204097 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188c6542d1eb98c6 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Started,Message:Started container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:11.31388743 +0000 UTC m=+1.482665960,LastTimestamp:2026-01-20 09:08:11.31388743 +0000 UTC m=+1.482665960,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.211997 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188c6542d21751b2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:11.316752818 +0000 UTC m=+1.485531348,LastTimestamp:2026-01-20 09:08:11.316752818 +0000 UTC m=+1.485531348,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.220473 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188c6542d2225d17 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:11.317476631 +0000 UTC m=+1.486255161,LastTimestamp:2026-01-20 09:08:11.317476631 +0000 UTC m=+1.486255161,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.229727 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188c6542d2255413 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:11.317670931 +0000 UTC m=+1.486449471,LastTimestamp:2026-01-20 09:08:11.317670931 +0000 UTC m=+1.486449471,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.234880 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188c6542e3068e78 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container: cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:11.600866936 +0000 UTC 
m=+1.769645496,LastTimestamp:2026-01-20 09:08:11.600866936 +0000 UTC m=+1.769645496,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.247635 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188c6542e3923b21 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:11.610020641 +0000 UTC m=+1.778799171,LastTimestamp:2026-01-20 09:08:11.610020641 +0000 UTC m=+1.778799171,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.258614 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188c6542e3a40098 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:11.611185304 +0000 UTC m=+1.779963874,LastTimestamp:2026-01-20 09:08:11.611185304 +0000 UTC m=+1.779963874,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.263748 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188c6542fc0a6749 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Created,Message:Created container: kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:12.020549449 +0000 UTC m=+2.189327979,LastTimestamp:2026-01-20 09:08:12.020549449 +0000 UTC m=+2.189327979,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.268303 5115 event.go:359] "Server rejected 
event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188c6542fc9d248e openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Started,Message:Started container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:12.030166158 +0000 UTC m=+2.198944698,LastTimestamp:2026-01-20 09:08:12.030166158 +0000 UTC m=+2.198944698,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.276949 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188c6542fcb2a8b3 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 
09:08:12.031576243 +0000 UTC m=+2.200354773,LastTimestamp:2026-01-20 09:08:12.031576243 +0000 UTC m=+2.200354773,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.282859 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188c654308e926d1 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:12.236474065 +0000 UTC m=+2.405252595,LastTimestamp:2026-01-20 09:08:12.236474065 +0000 UTC m=+2.405252595,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.291378 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188c6543091c2e93 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:12.239818387 +0000 UTC m=+2.408596917,LastTimestamp:2026-01-20 09:08:12.239818387 +0000 UTC m=+2.408596917,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.298346 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188c6543093a1683 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:12.241778307 +0000 UTC m=+2.410556837,LastTimestamp:2026-01-20 09:08:12.241778307 +0000 UTC m=+2.410556837,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.308991 5115 event.go:359] 
"Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188c654309a47151 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:12.248748369 +0000 UTC m=+2.417526899,LastTimestamp:2026-01-20 09:08:12.248748369 +0000 UTC m=+2.417526899,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.327304 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188c6543118ea312 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Created,Message:Created container: kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:12.381537042 +0000 UTC 
m=+2.550315572,LastTimestamp:2026-01-20 09:08:12.381537042 +0000 UTC m=+2.550315572,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.345330 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188c6543140ea7cc openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Started,Message:Started container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:12.423481292 +0000 UTC m=+2.592259822,LastTimestamp:2026-01-20 09:08:12.423481292 +0000 UTC m=+2.592259822,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.355607 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188c65431c33caac openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container: kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:12.56013278 +0000 UTC m=+2.728911320,LastTimestamp:2026-01-20 09:08:12.56013278 +0000 UTC m=+2.728911320,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: I0120 09:08:31.356671 5115 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Jan 20 09:08:31 crc kubenswrapper[5115]: I0120 09:08:31.358453 5115 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="8f5d392c8c16bc8dca522160d2028e27d588d5ba566d833fde1e5414c1a50de2" exitCode=255 Jan 20 09:08:31 crc kubenswrapper[5115]: I0120 09:08:31.358525 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"8f5d392c8c16bc8dca522160d2028e27d588d5ba566d833fde1e5414c1a50de2"} Jan 20 09:08:31 crc kubenswrapper[5115]: I0120 09:08:31.358744 5115 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 20 09:08:31 crc kubenswrapper[5115]: I0120 09:08:31.359461 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:08:31 crc kubenswrapper[5115]: I0120 09:08:31.359495 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:08:31 crc kubenswrapper[5115]: I0120 09:08:31.359505 5115 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.359779 5115 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 20 09:08:31 crc kubenswrapper[5115]: I0120 09:08:31.360027 5115 scope.go:117] "RemoveContainer" containerID="8f5d392c8c16bc8dca522160d2028e27d588d5ba566d833fde1e5414c1a50de2" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.361450 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188c65431c8a9c11 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:12.565822481 +0000 UTC m=+2.734601011,LastTimestamp:2026-01-20 09:08:12.565822481 +0000 UTC m=+2.734601011,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.366841 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188c65431caef60b openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Created,Message:Created container: etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:12.568204811 +0000 UTC m=+2.736983341,LastTimestamp:2026-01-20 09:08:12.568204811 +0000 UTC m=+2.736983341,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.384954 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188c65431cc7574a openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container: kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:12.56980257 +0000 UTC m=+2.738581100,LastTimestamp:2026-01-20 09:08:12.56980257 +0000 UTC m=+2.738581100,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.398385 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188c65431cf31efd openshift-kube-apiserver 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:12.572671741 +0000 UTC m=+2.741450271,LastTimestamp:2026-01-20 09:08:12.572671741 +0000 UTC m=+2.741450271,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.404940 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188c65431d01f48c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:12.573643916 +0000 UTC m=+2.742422446,LastTimestamp:2026-01-20 09:08:12.573643916 +0000 UTC m=+2.742422446,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.411422 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" 
cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188c65431d9b950f openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:12.583712015 +0000 UTC m=+2.752490545,LastTimestamp:2026-01-20 09:08:12.583712015 +0000 UTC m=+2.752490545,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.438125 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188c65431e024b8a openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:12.590443402 +0000 UTC m=+2.759221922,LastTimestamp:2026-01-20 09:08:12.590443402 +0000 UTC m=+2.759221922,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.463328 5115 
event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188c65431e386dc6 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:12.59399111 +0000 UTC m=+2.762769640,LastTimestamp:2026-01-20 09:08:12.59399111 +0000 UTC m=+2.762769640,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.472608 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188c65431e47c9f1 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Started,Message:Started container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:12.594997745 +0000 UTC m=+2.763776275,LastTimestamp:2026-01-20 09:08:12.594997745 +0000 UTC m=+2.763776275,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.507877 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188c65432977c57e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Created,Message:Created container: kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:12.78269171 +0000 UTC m=+2.951470240,LastTimestamp:2026-01-20 09:08:12.78269171 +0000 UTC m=+2.951470240,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.514192 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188c65432a474ea9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Started,Message:Started container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:12.796292777 +0000 UTC m=+2.965071307,LastTimestamp:2026-01-20 09:08:12.796292777 +0000 
UTC m=+2.965071307,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.540237 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188c65432a67fef4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:12.79843506 +0000 UTC m=+2.967213590,LastTimestamp:2026-01-20 09:08:12.79843506 +0000 UTC m=+2.967213590,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.607603 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188c65432a9c949e openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Created,Message:Created container: kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:12.801881246 +0000 UTC m=+2.970659766,LastTimestamp:2026-01-20 09:08:12.801881246 +0000 UTC m=+2.970659766,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.620694 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188c65432c86e54f openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Started,Message:Started container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:12.834014543 +0000 UTC m=+3.002793073,LastTimestamp:2026-01-20 09:08:12.834014543 +0000 UTC m=+3.002793073,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.631680 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" 
event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188c65432c9c5420 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:12.835419168 +0000 UTC m=+3.004197698,LastTimestamp:2026-01-20 09:08:12.835419168 +0000 UTC m=+3.004197698,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.638769 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188c65433ae06bc5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Created,Message:Created container: kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:13.074762693 +0000 UTC m=+3.243541223,LastTimestamp:2026-01-20 09:08:13.074762693 +0000 UTC m=+3.243541223,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 
crc kubenswrapper[5115]: E0120 09:08:31.649819 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188c65433aec7c98 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Created,Message:Created container: kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:13.075553432 +0000 UTC m=+3.244331962,LastTimestamp:2026-01-20 09:08:13.075553432 +0000 UTC m=+3.244331962,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.656728 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188c65433b7ffc48 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Started,Message:Started container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:13.085219912 +0000 UTC m=+3.253998442,LastTimestamp:2026-01-20 09:08:13.085219912 +0000 UTC 
m=+3.253998442,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.668324 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188c65433b95c056 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:13.086646358 +0000 UTC m=+3.255424878,LastTimestamp:2026-01-20 09:08:13.086646358 +0000 UTC m=+3.255424878,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.674777 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188c65433b9d375c openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Started,Message:Started container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:13.08713558 +0000 UTC m=+3.255914110,LastTimestamp:2026-01-20 09:08:13.08713558 +0000 UTC m=+3.255914110,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.679759 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188c654345551412 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:13.250180114 +0000 UTC m=+3.418958644,LastTimestamp:2026-01-20 09:08:13.250180114 +0000 UTC m=+3.418958644,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.688960 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace 
\"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188c65434b189713 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container: kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:13.346879251 +0000 UTC m=+3.515657781,LastTimestamp:2026-01-20 09:08:13.346879251 +0000 UTC m=+3.515657781,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.694293 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188c65434c7d9f5a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:13.370277722 +0000 UTC m=+3.539056252,LastTimestamp:2026-01-20 09:08:13.370277722 +0000 UTC m=+3.539056252,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.700233 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: 
User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188c65434c98db20 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:13.372062496 +0000 UTC m=+3.540841036,LastTimestamp:2026-01-20 09:08:13.372062496 +0000 UTC m=+3.540841036,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.705323 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188c65435549ab60 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Created,Message:Created container: etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:13.517867872 +0000 UTC m=+3.686646402,LastTimestamp:2026-01-20 09:08:13.517867872 +0000 UTC m=+3.686646402,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.709859 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188c65435739e698 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Started,Message:Started container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:13.550388888 +0000 UTC m=+3.719167408,LastTimestamp:2026-01-20 09:08:13.550388888 +0000 UTC m=+3.719167408,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.714351 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188c65435e287e00 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:13.666688512 +0000 UTC m=+3.835467042,LastTimestamp:2026-01-20 09:08:13.666688512 +0000 UTC 
m=+3.835467042,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.719215 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188c65435eb090da openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:13.675606234 +0000 UTC m=+3.844384764,LastTimestamp:2026-01-20 09:08:13.675606234 +0000 UTC m=+3.844384764,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.724578 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188c65438321e948 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on 
machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:14.287014216 +0000 UTC m=+4.455792756,LastTimestamp:2026-01-20 09:08:14.287014216 +0000 UTC m=+4.455792756,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.729018 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188c65439559065e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container: etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:14.59261603 +0000 UTC m=+4.761394570,LastTimestamp:2026-01-20 09:08:14.59261603 +0000 UTC m=+4.761394570,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.733310 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188c6543961c4507 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:14.605411591 
+0000 UTC m=+4.774190131,LastTimestamp:2026-01-20 09:08:14.605411591 +0000 UTC m=+4.774190131,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.739607 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188c654396315398 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:14.606791576 +0000 UTC m=+4.775570106,LastTimestamp:2026-01-20 09:08:14.606791576 +0000 UTC m=+4.775570106,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.744997 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188c6543a49a7abb openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container: 
etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:14.848563899 +0000 UTC m=+5.017342449,LastTimestamp:2026-01-20 09:08:14.848563899 +0000 UTC m=+5.017342449,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.749119 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188c6543a645870a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:14.876550922 +0000 UTC m=+5.045329462,LastTimestamp:2026-01-20 09:08:14.876550922 +0000 UTC m=+5.045329462,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.754528 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188c6543a657f4e5 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:14.877758693 +0000 UTC m=+5.046537233,LastTimestamp:2026-01-20 09:08:14.877758693 +0000 UTC m=+5.046537233,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.759711 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188c6543b296d17a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Created,Message:Created container: etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:15.083204986 +0000 UTC m=+5.251983516,LastTimestamp:2026-01-20 09:08:15.083204986 +0000 UTC m=+5.251983516,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.764481 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188c6543b35e64a4 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Started,Message:Started container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:15.096284324 +0000 UTC m=+5.265062854,LastTimestamp:2026-01-20 09:08:15.096284324 +0000 UTC m=+5.265062854,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.770847 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188c6543b36e64c3 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:15.097332931 +0000 UTC m=+5.266111481,LastTimestamp:2026-01-20 09:08:15.097332931 +0000 UTC m=+5.266111481,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.776540 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188c6543c2682d8b 
openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Created,Message:Created container: etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:15.348583819 +0000 UTC m=+5.517362349,LastTimestamp:2026-01-20 09:08:15.348583819 +0000 UTC m=+5.517362349,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.781754 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188c6543c3420d5c openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Started,Message:Started container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:15.362862428 +0000 UTC m=+5.531640958,LastTimestamp:2026-01-20 09:08:15.362862428 +0000 UTC m=+5.531640958,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.787023 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188c6543c3547c09 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:15.364070409 +0000 UTC m=+5.532848939,LastTimestamp:2026-01-20 09:08:15.364070409 +0000 UTC m=+5.532848939,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.792611 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188c6543cf5fafdc openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Created,Message:Created container: etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:15.566131164 +0000 UTC m=+5.734909694,LastTimestamp:2026-01-20 09:08:15.566131164 +0000 UTC m=+5.734909694,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.797825 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188c6543d0071c80 openshift-etcd 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Started,Message:Started container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:15.577103488 +0000 UTC m=+5.745882018,LastTimestamp:2026-01-20 09:08:15.577103488 +0000 UTC m=+5.745882018,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.806549 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Jan 20 09:08:31 crc kubenswrapper[5115]: &Event{ObjectMeta:{kube-controller-manager-crc.188c6544cbc4d31a openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://localhost:10357/healthz": context deadline exceeded Jan 20 09:08:31 crc kubenswrapper[5115]: body: Jan 20 09:08:31 crc kubenswrapper[5115]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:19.800617754 +0000 UTC m=+9.969396284,LastTimestamp:2026-01-20 09:08:19.800617754 +0000 UTC m=+9.969396284,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 20 09:08:31 crc kubenswrapper[5115]: > Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.811772 5115 event.go:359] 
"Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188c6544cbc658cd openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://localhost:10357/healthz\": context deadline exceeded,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:19.800717517 +0000 UTC m=+9.969496047,LastTimestamp:2026-01-20 09:08:19.800717517 +0000 UTC m=+9.969496047,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.817162 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Jan 20 09:08:31 crc kubenswrapper[5115]: &Event{ObjectMeta:{kube-apiserver-crc.188c65463ccf0ca6 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Jan 20 09:08:31 crc kubenswrapper[5115]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path 
\"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 20 09:08:31 crc kubenswrapper[5115]: Jan 20 09:08:31 crc kubenswrapper[5115]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:25.99208055 +0000 UTC m=+16.160859090,LastTimestamp:2026-01-20 09:08:25.99208055 +0000 UTC m=+16.160859090,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 20 09:08:31 crc kubenswrapper[5115]: > Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.824008 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188c65463cd06775 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:25.992169333 +0000 UTC m=+16.160947873,LastTimestamp:2026-01-20 09:08:25.992169333 +0000 UTC m=+16.160947873,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.831209 5115 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188c65463ccf0ca6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Jan 20 09:08:31 crc kubenswrapper[5115]: &Event{ObjectMeta:{kube-apiserver-crc.188c65463ccf0ca6 
openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Jan 20 09:08:31 crc kubenswrapper[5115]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 20 09:08:31 crc kubenswrapper[5115]: Jan 20 09:08:31 crc kubenswrapper[5115]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:25.99208055 +0000 UTC m=+16.160859090,LastTimestamp:2026-01-20 09:08:25.999499043 +0000 UTC m=+16.168277573,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 20 09:08:31 crc kubenswrapper[5115]: > Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.838163 5115 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188c65463cd06775\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188c65463cd06775 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:25.992169333 +0000 UTC m=+16.160947873,LastTimestamp:2026-01-20 09:08:25.999556224 +0000 UTC 
m=+16.168334754,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.843694 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Jan 20 09:08:31 crc kubenswrapper[5115]: &Event{ObjectMeta:{kube-controller-manager-crc.188c65471fd4bd95 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://localhost:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 20 09:08:31 crc kubenswrapper[5115]: body: Jan 20 09:08:31 crc kubenswrapper[5115]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:29.800881557 +0000 UTC m=+19.969660117,LastTimestamp:2026-01-20 09:08:29.800881557 +0000 UTC m=+19.969660117,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 20 09:08:31 crc kubenswrapper[5115]: > Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.849111 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188c65471fd651cf openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:29.800985039 +0000 UTC m=+19.969763599,LastTimestamp:2026-01-20 09:08:29.800985039 +0000 UTC m=+19.969763599,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.855077 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Jan 20 09:08:31 crc kubenswrapper[5115]: &Event{ObjectMeta:{kube-apiserver-crc.188c654768a59e81 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": read tcp 192.168.126.11:44822->192.168.126.11:17697: read: connection reset by peer Jan 20 09:08:31 crc kubenswrapper[5115]: body: Jan 20 09:08:31 crc kubenswrapper[5115]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:31.022530177 +0000 UTC m=+21.191308707,LastTimestamp:2026-01-20 09:08:31.022530177 +0000 UTC m=+21.191308707,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 20 09:08:31 crc kubenswrapper[5115]: > Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.858920 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188c654768a631f8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:44822->192.168.126.11:17697: read: connection reset by peer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:31.022567928 +0000 UTC m=+21.191346458,LastTimestamp:2026-01-20 09:08:31.022567928 +0000 UTC m=+21.191346458,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.862756 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Jan 20 09:08:31 crc kubenswrapper[5115]: &Event{ObjectMeta:{kube-apiserver-crc.188c654768a972f6 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": dial tcp 192.168.126.11:17697: connect: connection refused Jan 20 09:08:31 crc kubenswrapper[5115]: body: Jan 20 09:08:31 crc kubenswrapper[5115]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:31.022781174 +0000 UTC m=+21.191559704,LastTimestamp:2026-01-20 09:08:31.022781174 +0000 UTC m=+21.191559704,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 20 09:08:31 crc kubenswrapper[5115]: > Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.866509 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188c654768a9bbc6 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:31.022799814 +0000 UTC m=+21.191578344,LastTimestamp:2026-01-20 09:08:31.022799814 +0000 UTC m=+21.191578344,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc 
kubenswrapper[5115]: E0120 09:08:31.872167 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Jan 20 09:08:31 crc kubenswrapper[5115]: &Event{ObjectMeta:{kube-apiserver-crc.188c654768ad87a4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Liveness probe error: Get "https://192.168.126.11:17697/healthz": read tcp 192.168.126.11:44806->192.168.126.11:17697: read: connection reset by peer Jan 20 09:08:31 crc kubenswrapper[5115]: body: Jan 20 09:08:31 crc kubenswrapper[5115]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:31.023048612 +0000 UTC m=+21.191827162,LastTimestamp:2026-01-20 09:08:31.023048612 +0000 UTC m=+21.191827162,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 20 09:08:31 crc kubenswrapper[5115]: > Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.876988 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188c654768b02c92 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Liveness probe failed: 
Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:44806->192.168.126.11:17697: read: connection reset by peer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:31.023221906 +0000 UTC m=+21.192000446,LastTimestamp:2026-01-20 09:08:31.023221906 +0000 UTC m=+21.192000446,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.881835 5115 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188c65434c98db20\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188c65434c98db20 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:13.372062496 +0000 UTC m=+3.540841036,LastTimestamp:2026-01-20 09:08:31.36119083 +0000 UTC m=+21.529969360,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.893369 5115 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188c65435e287e00\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" 
event="&Event{ObjectMeta:{kube-apiserver-crc.188c65435e287e00 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:13.666688512 +0000 UTC m=+3.835467042,LastTimestamp:2026-01-20 09:08:31.684774634 +0000 UTC m=+21.853553164,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.902911 5115 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188c65435eb090da\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188c65435eb090da openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:13.675606234 +0000 UTC m=+3.844384764,LastTimestamp:2026-01-20 09:08:31.695080057 +0000 UTC m=+21.863858577,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:32 crc kubenswrapper[5115]: I0120 09:08:32.082760 5115 csi_plugin.go:988] Failed to contact API server when waiting for CSINode 
publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 20 09:08:32 crc kubenswrapper[5115]: I0120 09:08:32.363416 5115 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Jan 20 09:08:32 crc kubenswrapper[5115]: I0120 09:08:32.365383 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"e157d56d2881873558c3a0d9a4b25ce3b65ef2f77f4f1d4eda7729ff24e3dc7e"} Jan 20 09:08:32 crc kubenswrapper[5115]: I0120 09:08:32.365666 5115 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 20 09:08:32 crc kubenswrapper[5115]: I0120 09:08:32.366290 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:08:32 crc kubenswrapper[5115]: I0120 09:08:32.366363 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:08:32 crc kubenswrapper[5115]: I0120 09:08:32.366387 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:08:32 crc kubenswrapper[5115]: E0120 09:08:32.367027 5115 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 20 09:08:32 crc kubenswrapper[5115]: E0120 09:08:32.718497 5115 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 20 09:08:33 crc kubenswrapper[5115]: I0120 
09:08:33.082548 5115 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 20 09:08:33 crc kubenswrapper[5115]: I0120 09:08:33.242338 5115 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 20 09:08:33 crc kubenswrapper[5115]: I0120 09:08:33.242665 5115 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 20 09:08:33 crc kubenswrapper[5115]: I0120 09:08:33.244285 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:08:33 crc kubenswrapper[5115]: I0120 09:08:33.244343 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:08:33 crc kubenswrapper[5115]: I0120 09:08:33.244355 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:08:33 crc kubenswrapper[5115]: E0120 09:08:33.244831 5115 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 20 09:08:33 crc kubenswrapper[5115]: I0120 09:08:33.255950 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 20 09:08:33 crc kubenswrapper[5115]: I0120 09:08:33.370616 5115 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Jan 20 09:08:33 crc kubenswrapper[5115]: I0120 09:08:33.371329 5115 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Jan 20 09:08:33 crc 
kubenswrapper[5115]: I0120 09:08:33.373194 5115 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="e157d56d2881873558c3a0d9a4b25ce3b65ef2f77f4f1d4eda7729ff24e3dc7e" exitCode=255 Jan 20 09:08:33 crc kubenswrapper[5115]: I0120 09:08:33.373479 5115 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 20 09:08:33 crc kubenswrapper[5115]: I0120 09:08:33.373537 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"e157d56d2881873558c3a0d9a4b25ce3b65ef2f77f4f1d4eda7729ff24e3dc7e"} Jan 20 09:08:33 crc kubenswrapper[5115]: I0120 09:08:33.373612 5115 scope.go:117] "RemoveContainer" containerID="8f5d392c8c16bc8dca522160d2028e27d588d5ba566d833fde1e5414c1a50de2" Jan 20 09:08:33 crc kubenswrapper[5115]: I0120 09:08:33.373858 5115 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 20 09:08:33 crc kubenswrapper[5115]: I0120 09:08:33.375002 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:08:33 crc kubenswrapper[5115]: I0120 09:08:33.375053 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:08:33 crc kubenswrapper[5115]: I0120 09:08:33.375075 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:08:33 crc kubenswrapper[5115]: I0120 09:08:33.375006 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:08:33 crc kubenswrapper[5115]: I0120 09:08:33.375176 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:08:33 crc kubenswrapper[5115]: I0120 09:08:33.375214 5115 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:08:33 crc kubenswrapper[5115]: E0120 09:08:33.376076 5115 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 20 09:08:33 crc kubenswrapper[5115]: E0120 09:08:33.376239 5115 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 20 09:08:33 crc kubenswrapper[5115]: I0120 09:08:33.376600 5115 scope.go:117] "RemoveContainer" containerID="e157d56d2881873558c3a0d9a4b25ce3b65ef2f77f4f1d4eda7729ff24e3dc7e" Jan 20 09:08:33 crc kubenswrapper[5115]: E0120 09:08:33.376913 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 20 09:08:33 crc kubenswrapper[5115]: E0120 09:08:33.385433 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188c6547f4f94bb3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod 
kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:33.376824243 +0000 UTC m=+23.545602773,LastTimestamp:2026-01-20 09:08:33.376824243 +0000 UTC m=+23.545602773,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:34 crc kubenswrapper[5115]: I0120 09:08:34.084983 5115 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 20 09:08:34 crc kubenswrapper[5115]: I0120 09:08:34.379040 5115 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Jan 20 09:08:35 crc kubenswrapper[5115]: I0120 09:08:35.085741 5115 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 20 09:08:35 crc kubenswrapper[5115]: I0120 09:08:35.975737 5115 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:08:35 crc kubenswrapper[5115]: I0120 09:08:35.976054 5115 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 20 09:08:35 crc kubenswrapper[5115]: I0120 09:08:35.977425 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:08:35 crc kubenswrapper[5115]: I0120 09:08:35.977512 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:08:35 crc 
kubenswrapper[5115]: I0120 09:08:35.977536 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:08:35 crc kubenswrapper[5115]: E0120 09:08:35.978268 5115 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 20 09:08:35 crc kubenswrapper[5115]: I0120 09:08:35.978721 5115 scope.go:117] "RemoveContainer" containerID="e157d56d2881873558c3a0d9a4b25ce3b65ef2f77f4f1d4eda7729ff24e3dc7e"
Jan 20 09:08:35 crc kubenswrapper[5115]: E0120 09:08:35.979107 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Jan 20 09:08:35 crc kubenswrapper[5115]: E0120 09:08:35.986291 5115 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188c6547f4f94bb3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188c6547f4f94bb3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:33.376824243 +0000 UTC m=+23.545602773,LastTimestamp:2026-01-20 09:08:35.979047718 +0000 UTC m=+26.147826288,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 20 09:08:36 crc kubenswrapper[5115]: I0120 09:08:36.078588 5115 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 20 09:08:36 crc kubenswrapper[5115]: I0120 09:08:36.807312 5115 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 20 09:08:36 crc kubenswrapper[5115]: I0120 09:08:36.807634 5115 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 20 09:08:36 crc kubenswrapper[5115]: I0120 09:08:36.809074 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:08:36 crc kubenswrapper[5115]: I0120 09:08:36.809141 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:08:36 crc kubenswrapper[5115]: I0120 09:08:36.809156 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:08:36 crc kubenswrapper[5115]: E0120 09:08:36.809626 5115 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 20 09:08:36 crc kubenswrapper[5115]: I0120 09:08:36.815512 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 20 09:08:37 crc kubenswrapper[5115]: I0120 09:08:37.081602 5115 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 20 09:08:37 crc kubenswrapper[5115]: I0120 09:08:37.381448 5115 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 20 09:08:37 crc kubenswrapper[5115]: I0120 09:08:37.383045 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:08:37 crc kubenswrapper[5115]: I0120 09:08:37.383120 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:08:37 crc kubenswrapper[5115]: I0120 09:08:37.383139 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:08:37 crc kubenswrapper[5115]: I0120 09:08:37.383177 5115 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Jan 20 09:08:37 crc kubenswrapper[5115]: I0120 09:08:37.391480 5115 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 20 09:08:37 crc kubenswrapper[5115]: E0120 09:08:37.392151 5115 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Jan 20 09:08:37 crc kubenswrapper[5115]: I0120 09:08:37.392511 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:08:37 crc kubenswrapper[5115]: I0120 09:08:37.392552 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:08:37 crc kubenswrapper[5115]: I0120 09:08:37.392564 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:08:37 crc kubenswrapper[5115]: E0120 09:08:37.392929 5115 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 20 09:08:38 crc kubenswrapper[5115]: I0120 09:08:38.080445 5115 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 20 09:08:39 crc kubenswrapper[5115]: I0120 09:08:39.086026 5115 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 20 09:08:39 crc kubenswrapper[5115]: E0120 09:08:39.727437 5115 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Jan 20 09:08:40 crc kubenswrapper[5115]: I0120 09:08:40.084125 5115 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 20 09:08:40 crc kubenswrapper[5115]: E0120 09:08:40.138849 5115 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jan 20 09:08:40 crc kubenswrapper[5115]: E0120 09:08:40.265997 5115 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 20 09:08:40 crc kubenswrapper[5115]: E0120 09:08:40.684474 5115 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jan 20 09:08:40 crc kubenswrapper[5115]: E0120 09:08:40.989519 5115 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Jan 20 09:08:41 crc kubenswrapper[5115]: I0120 09:08:41.077595 5115 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 20 09:08:42 crc kubenswrapper[5115]: I0120 09:08:42.084795 5115 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 20 09:08:42 crc kubenswrapper[5115]: I0120 09:08:42.366791 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 20 09:08:42 crc kubenswrapper[5115]: I0120 09:08:42.367215 5115 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 20 09:08:42 crc kubenswrapper[5115]: I0120 09:08:42.368492 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:08:42 crc kubenswrapper[5115]: I0120 09:08:42.369084 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:08:42 crc kubenswrapper[5115]: I0120 09:08:42.369347 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:08:42 crc kubenswrapper[5115]: E0120 09:08:42.370326 5115 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 20 09:08:42 crc kubenswrapper[5115]: I0120 09:08:42.371043 5115 scope.go:117] "RemoveContainer" containerID="e157d56d2881873558c3a0d9a4b25ce3b65ef2f77f4f1d4eda7729ff24e3dc7e"
Jan 20 09:08:42 crc kubenswrapper[5115]: E0120 09:08:42.371579 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Jan 20 09:08:42 crc kubenswrapper[5115]: E0120 09:08:42.380603 5115 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188c6547f4f94bb3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188c6547f4f94bb3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:33.376824243 +0000 UTC m=+23.545602773,LastTimestamp:2026-01-20 09:08:42.371510742 +0000 UTC m=+32.540289322,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 20 09:08:43 crc kubenswrapper[5115]: I0120 09:08:43.085263 5115 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 20 09:08:43 crc kubenswrapper[5115]: E0120 09:08:43.205236 5115 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jan 20 09:08:44 crc kubenswrapper[5115]: I0120 09:08:44.084866 5115 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 20 09:08:44 crc kubenswrapper[5115]: I0120 09:08:44.393128 5115 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 20 09:08:44 crc kubenswrapper[5115]: I0120 09:08:44.394605 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:08:44 crc kubenswrapper[5115]: I0120 09:08:44.394673 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:08:44 crc kubenswrapper[5115]: I0120 09:08:44.394689 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:08:44 crc kubenswrapper[5115]: I0120 09:08:44.394723 5115 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Jan 20 09:08:44 crc kubenswrapper[5115]: E0120 09:08:44.411761 5115 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Jan 20 09:08:45 crc kubenswrapper[5115]: I0120 09:08:45.084690 5115 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 20 09:08:46 crc kubenswrapper[5115]: I0120 09:08:46.082366 5115 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 20 09:08:46 crc kubenswrapper[5115]: E0120 09:08:46.733048 5115 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Jan 20 09:08:47 crc kubenswrapper[5115]: I0120 09:08:47.083008 5115 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 20 09:08:48 crc kubenswrapper[5115]: I0120 09:08:48.085834 5115 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 20 09:08:49 crc kubenswrapper[5115]: I0120 09:08:49.082643 5115 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 20 09:08:50 crc kubenswrapper[5115]: I0120 09:08:50.083714 5115 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 20 09:08:50 crc kubenswrapper[5115]: E0120 09:08:50.267201 5115 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 20 09:08:51 crc kubenswrapper[5115]: I0120 09:08:51.085562 5115 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 20 09:08:51 crc kubenswrapper[5115]: I0120 09:08:51.412595 5115 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 20 09:08:51 crc kubenswrapper[5115]: I0120 09:08:51.413874 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:08:51 crc kubenswrapper[5115]: I0120 09:08:51.413968 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:08:51 crc kubenswrapper[5115]: I0120 09:08:51.413985 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:08:51 crc kubenswrapper[5115]: I0120 09:08:51.414028 5115 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Jan 20 09:08:51 crc kubenswrapper[5115]: E0120 09:08:51.423966 5115 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Jan 20 09:08:52 crc kubenswrapper[5115]: I0120 09:08:52.084654 5115 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 20 09:08:53 crc kubenswrapper[5115]: I0120 09:08:53.081368 5115 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 20 09:08:53 crc kubenswrapper[5115]: E0120 09:08:53.738432 5115 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Jan 20 09:08:54 crc kubenswrapper[5115]: I0120 09:08:54.081061 5115 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 20 09:08:55 crc kubenswrapper[5115]: I0120 09:08:55.083758 5115 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 20 09:08:55 crc kubenswrapper[5115]: I0120 09:08:55.216787 5115 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 20 09:08:55 crc kubenswrapper[5115]: I0120 09:08:55.218081 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:08:55 crc kubenswrapper[5115]: I0120 09:08:55.218145 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:08:55 crc kubenswrapper[5115]: I0120 09:08:55.218160 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:08:55 crc kubenswrapper[5115]: E0120 09:08:55.218641 5115 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 20 09:08:55 crc kubenswrapper[5115]: I0120 09:08:55.219173 5115 scope.go:117] "RemoveContainer" containerID="e157d56d2881873558c3a0d9a4b25ce3b65ef2f77f4f1d4eda7729ff24e3dc7e"
Jan 20 09:08:55 crc kubenswrapper[5115]: E0120 09:08:55.225485 5115 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188c65434c98db20\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188c65434c98db20 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:13.372062496 +0000 UTC m=+3.540841036,LastTimestamp:2026-01-20 09:08:55.221134156 +0000 UTC m=+45.389912686,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 20 09:08:55 crc kubenswrapper[5115]: E0120 09:08:55.433857 5115 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188c65435e287e00\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188c65435e287e00 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:13.666688512 +0000 UTC m=+3.835467042,LastTimestamp:2026-01-20 09:08:55.429202234 +0000 UTC m=+45.597980784,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 20 09:08:55 crc kubenswrapper[5115]: E0120 09:08:55.442990 5115 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188c65435eb090da\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188c65435eb090da openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:13.675606234 +0000 UTC m=+3.844384764,LastTimestamp:2026-01-20 09:08:55.44081004 +0000 UTC m=+45.609588570,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 20 09:08:55 crc kubenswrapper[5115]: I0120 09:08:55.451300 5115 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log"
Jan 20 09:08:55 crc kubenswrapper[5115]: I0120 09:08:55.453298 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"df32e00f083482ec09df9e5a364f853a077b7da4bc1f27c5f26092bc413089cb"}
Jan 20 09:08:55 crc kubenswrapper[5115]: I0120 09:08:55.453747 5115 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 20 09:08:55 crc kubenswrapper[5115]: I0120 09:08:55.454458 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:08:55 crc kubenswrapper[5115]: I0120 09:08:55.454515 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:08:55 crc kubenswrapper[5115]: I0120 09:08:55.454528 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:08:55 crc kubenswrapper[5115]: E0120 09:08:55.454976 5115 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 20 09:08:56 crc kubenswrapper[5115]: I0120 09:08:56.081296 5115 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 20 09:08:56 crc kubenswrapper[5115]: E0120 09:08:56.239949 5115 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jan 20 09:08:56 crc kubenswrapper[5115]: I0120 09:08:56.460565 5115 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log"
Jan 20 09:08:56 crc kubenswrapper[5115]: I0120 09:08:56.462152 5115 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log"
Jan 20 09:08:56 crc kubenswrapper[5115]: I0120 09:08:56.464371 5115 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="df32e00f083482ec09df9e5a364f853a077b7da4bc1f27c5f26092bc413089cb" exitCode=255
Jan 20 09:08:56 crc kubenswrapper[5115]: I0120 09:08:56.464445 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"df32e00f083482ec09df9e5a364f853a077b7da4bc1f27c5f26092bc413089cb"}
Jan 20 09:08:56 crc kubenswrapper[5115]: I0120 09:08:56.464494 5115 scope.go:117] "RemoveContainer" containerID="e157d56d2881873558c3a0d9a4b25ce3b65ef2f77f4f1d4eda7729ff24e3dc7e"
Jan 20 09:08:56 crc kubenswrapper[5115]: I0120 09:08:56.464759 5115 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 20 09:08:56 crc kubenswrapper[5115]: I0120 09:08:56.465628 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:08:56 crc kubenswrapper[5115]: I0120 09:08:56.465724 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:08:56 crc kubenswrapper[5115]: I0120 09:08:56.465756 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:08:56 crc kubenswrapper[5115]: E0120 09:08:56.466465 5115 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 20 09:08:56 crc kubenswrapper[5115]: I0120 09:08:56.467055 5115 scope.go:117] "RemoveContainer" containerID="df32e00f083482ec09df9e5a364f853a077b7da4bc1f27c5f26092bc413089cb"
Jan 20 09:08:56 crc kubenswrapper[5115]: E0120 09:08:56.467540 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Jan 20 09:08:56 crc kubenswrapper[5115]: E0120 09:08:56.476989 5115 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188c6547f4f94bb3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188c6547f4f94bb3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:33.376824243 +0000 UTC m=+23.545602773,LastTimestamp:2026-01-20 09:08:56.467473698 +0000 UTC m=+46.636252268,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 20 09:08:57 crc kubenswrapper[5115]: I0120 09:08:57.082349 5115 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 20 09:08:57 crc kubenswrapper[5115]: I0120 09:08:57.472987 5115 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log"
Jan 20 09:08:58 crc kubenswrapper[5115]: E0120 09:08:58.050701 5115 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Jan 20 09:08:58 crc kubenswrapper[5115]: I0120 09:08:58.081840 5115 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 20 09:08:58 crc kubenswrapper[5115]: I0120 09:08:58.424751 5115 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 20 09:08:58 crc kubenswrapper[5115]: I0120 09:08:58.426434 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:08:58 crc kubenswrapper[5115]: I0120 09:08:58.426640 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:08:58 crc kubenswrapper[5115]: I0120 09:08:58.426781 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:08:58 crc kubenswrapper[5115]: I0120 09:08:58.426991 5115 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Jan 20 09:08:58 crc kubenswrapper[5115]: E0120 09:08:58.440186 5115 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Jan 20 09:08:58 crc kubenswrapper[5115]: I0120 09:08:58.811672 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 20 09:08:58 crc kubenswrapper[5115]: I0120 09:08:58.812032 5115 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 20 09:08:58 crc kubenswrapper[5115]: I0120 09:08:58.813323 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:08:58 crc kubenswrapper[5115]: I0120 09:08:58.813367 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:08:58 crc kubenswrapper[5115]: I0120 09:08:58.813382 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:08:58 crc kubenswrapper[5115]: E0120 09:08:58.813805 5115 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 20 09:08:59 crc kubenswrapper[5115]: I0120 09:08:59.084620 5115 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 20 09:09:00 crc kubenswrapper[5115]: I0120 09:09:00.082962 5115 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 20 09:09:00 crc kubenswrapper[5115]: E0120 09:09:00.268120 5115 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 20 09:09:00 crc kubenswrapper[5115]: E0120 09:09:00.745084 5115 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Jan 20 09:09:01 crc kubenswrapper[5115]: I0120 09:09:01.081980 5115 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 20 09:09:02 crc kubenswrapper[5115]: I0120 09:09:02.083721 5115 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 20 09:09:03 crc kubenswrapper[5115]: I0120 09:09:03.080648 5115 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 20 09:09:04 crc kubenswrapper[5115]: I0120 09:09:04.082745 5115 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 20 09:09:05 crc kubenswrapper[5115]: I0120 09:09:05.082332 5115 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 20 09:09:05 crc kubenswrapper[5115]: I0120 09:09:05.441121 5115 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 20 09:09:05 crc kubenswrapper[5115]: I0120 09:09:05.442480 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:09:05 crc kubenswrapper[5115]: I0120 09:09:05.442522 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:09:05 crc kubenswrapper[5115]: I0120 09:09:05.442533 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:09:05 crc kubenswrapper[5115]: I0120 09:09:05.442567 5115 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Jan 20 09:09:05 crc kubenswrapper[5115]: I0120 09:09:05.454102 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 20 09:09:05 crc kubenswrapper[5115]: I0120 09:09:05.454328 5115 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 20 09:09:05 crc kubenswrapper[5115]: I0120 09:09:05.454955 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:09:05 crc kubenswrapper[5115]: I0120 09:09:05.455096 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:09:05 crc kubenswrapper[5115]: E0120 09:09:05.455174 5115 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Jan 20 09:09:05 crc kubenswrapper[5115]: I0120 09:09:05.455184 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:09:05 crc kubenswrapper[5115]: E0120 09:09:05.455704 5115 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 20 09:09:05 crc kubenswrapper[5115]: I0120 09:09:05.456165 5115 scope.go:117] "RemoveContainer" containerID="df32e00f083482ec09df9e5a364f853a077b7da4bc1f27c5f26092bc413089cb"
Jan 20 09:09:05 crc kubenswrapper[5115]: E0120 09:09:05.456563 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Jan 20 09:09:05 crc kubenswrapper[5115]: E0120 09:09:05.461598 5115 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188c6547f4f94bb3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188c6547f4f94bb3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:33.376824243 +0000 UTC m=+23.545602773,LastTimestamp:2026-01-20 09:09:05.456523977 +0000 UTC m=+55.625302517,Count:5,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 20 09:09:05 crc kubenswrapper[5115]: E0120 09:09:05.694467 5115 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jan 20 09:09:05 crc kubenswrapper[5115]: I0120 09:09:05.975913 5115 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 20 09:09:05 crc kubenswrapper[5115]: I0120 09:09:05.976528 5115 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 20 09:09:05 crc kubenswrapper[5115]: I0120 09:09:05.977885 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:09:05 crc kubenswrapper[5115]: I0120 09:09:05.978027 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:09:05 crc kubenswrapper[5115]: I0120 09:09:05.978056 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc"
event="NodeHasSufficientPID" Jan 20 09:09:05 crc kubenswrapper[5115]: E0120 09:09:05.978977 5115 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 20 09:09:05 crc kubenswrapper[5115]: I0120 09:09:05.979520 5115 scope.go:117] "RemoveContainer" containerID="df32e00f083482ec09df9e5a364f853a077b7da4bc1f27c5f26092bc413089cb" Jan 20 09:09:05 crc kubenswrapper[5115]: E0120 09:09:05.979908 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 20 09:09:05 crc kubenswrapper[5115]: E0120 09:09:05.988224 5115 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188c6547f4f94bb3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188c6547f4f94bb3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:33.376824243 +0000 UTC m=+23.545602773,LastTimestamp:2026-01-20 09:09:05.979830559 +0000 UTC m=+56.148609119,Count:6,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:09:06 crc kubenswrapper[5115]: I0120 09:09:06.082764 5115 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 20 09:09:07 crc kubenswrapper[5115]: I0120 09:09:07.085019 5115 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 20 09:09:07 crc kubenswrapper[5115]: E0120 09:09:07.678930 5115 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 20 09:09:07 crc kubenswrapper[5115]: E0120 09:09:07.752073 5115 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 20 09:09:08 crc kubenswrapper[5115]: I0120 09:09:08.081236 5115 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 20 09:09:09 crc kubenswrapper[5115]: I0120 09:09:09.085610 5115 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 20 
09:09:10 crc kubenswrapper[5115]: I0120 09:09:10.082288 5115 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 20 09:09:10 crc kubenswrapper[5115]: E0120 09:09:10.268933 5115 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 20 09:09:11 crc kubenswrapper[5115]: I0120 09:09:11.082657 5115 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 20 09:09:12 crc kubenswrapper[5115]: I0120 09:09:12.083046 5115 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 20 09:09:12 crc kubenswrapper[5115]: I0120 09:09:12.455513 5115 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 20 09:09:12 crc kubenswrapper[5115]: I0120 09:09:12.457653 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:12 crc kubenswrapper[5115]: I0120 09:09:12.457706 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:12 crc kubenswrapper[5115]: I0120 09:09:12.457729 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:12 crc kubenswrapper[5115]: I0120 09:09:12.457758 5115 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 20 09:09:12 crc kubenswrapper[5115]: E0120 09:09:12.469233 5115 
kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 20 09:09:13 crc kubenswrapper[5115]: I0120 09:09:13.082441 5115 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 20 09:09:14 crc kubenswrapper[5115]: I0120 09:09:14.080340 5115 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 20 09:09:14 crc kubenswrapper[5115]: E0120 09:09:14.758799 5115 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 20 09:09:15 crc kubenswrapper[5115]: I0120 09:09:15.082289 5115 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 20 09:09:15 crc kubenswrapper[5115]: I0120 09:09:15.642006 5115 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-6tc4b" Jan 20 09:09:15 crc kubenswrapper[5115]: I0120 09:09:15.648807 5115 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-6tc4b" Jan 20 09:09:15 crc kubenswrapper[5115]: I0120 09:09:15.739444 5115 reconstruct.go:205] "DevicePaths of reconstructed 
volumes updated" Jan 20 09:09:15 crc kubenswrapper[5115]: I0120 09:09:15.990253 5115 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 20 09:09:16 crc kubenswrapper[5115]: I0120 09:09:16.650251 5115 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kube-apiserver-client-kubelet" expiration="2026-02-19 09:04:15 +0000 UTC" deadline="2026-02-11 10:22:01.71770075 +0000 UTC" Jan 20 09:09:16 crc kubenswrapper[5115]: I0120 09:09:16.650348 5115 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kube-apiserver-client-kubelet" sleep="529h12m45.067359934s" Jan 20 09:09:19 crc kubenswrapper[5115]: I0120 09:09:19.470035 5115 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 20 09:09:19 crc kubenswrapper[5115]: I0120 09:09:19.471603 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:19 crc kubenswrapper[5115]: I0120 09:09:19.471749 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:19 crc kubenswrapper[5115]: I0120 09:09:19.471819 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:19 crc kubenswrapper[5115]: I0120 09:09:19.472071 5115 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 20 09:09:19 crc kubenswrapper[5115]: I0120 09:09:19.485210 5115 kubelet_node_status.go:127] "Node was previously registered" node="crc" Jan 20 09:09:19 crc kubenswrapper[5115]: I0120 09:09:19.485790 5115 kubelet_node_status.go:81] "Successfully registered node" node="crc" Jan 20 09:09:19 crc kubenswrapper[5115]: E0120 09:09:19.485934 5115 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Jan 20 
09:09:19 crc kubenswrapper[5115]: I0120 09:09:19.489635 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:19 crc kubenswrapper[5115]: I0120 09:09:19.489691 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:19 crc kubenswrapper[5115]: I0120 09:09:19.489702 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:19 crc kubenswrapper[5115]: I0120 09:09:19.489722 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:19 crc kubenswrapper[5115]: I0120 09:09:19.489736 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:19Z","lastTransitionTime":"2026-01-20T09:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:19 crc kubenswrapper[5115]: E0120 09:09:19.502803 5115 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f3c68733-f696-46f4-a86e-b22c133b82e3\\\",\\\"systemUUID\\\":\\\"4e7ead0d-ccd6-45dd-b671-f46e59803438\\\"},\\\"runtimeHa
ndlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:19 crc kubenswrapper[5115]: I0120 09:09:19.513786 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:19 crc kubenswrapper[5115]: I0120 09:09:19.513838 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:19 crc kubenswrapper[5115]: I0120 09:09:19.513851 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:19 crc kubenswrapper[5115]: I0120 09:09:19.513867 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:19 crc kubenswrapper[5115]: I0120 09:09:19.513878 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:19Z","lastTransitionTime":"2026-01-20T09:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:19 crc kubenswrapper[5115]: E0120 09:09:19.522653 5115 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f3c68733-f696-46f4-a86e-b22c133b82e3\\\",\\\"systemUUID\\\":\\\"4e7ead0d-ccd6-45dd-b671-f46e59803438\\\"},\\\"runtimeHa
ndlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:19 crc kubenswrapper[5115]: I0120 09:09:19.530204 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:19 crc kubenswrapper[5115]: I0120 09:09:19.530251 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:19 crc kubenswrapper[5115]: I0120 09:09:19.530264 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:19 crc kubenswrapper[5115]: I0120 09:09:19.530281 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:19 crc kubenswrapper[5115]: I0120 09:09:19.530293 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:19Z","lastTransitionTime":"2026-01-20T09:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:19 crc kubenswrapper[5115]: E0120 09:09:19.541163 5115 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f3c68733-f696-46f4-a86e-b22c133b82e3\\\",\\\"systemUUID\\\":\\\"4e7ead0d-ccd6-45dd-b671-f46e59803438\\\"},\\\"runtimeHa
ndlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:19 crc kubenswrapper[5115]: I0120 09:09:19.547520 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:19 crc kubenswrapper[5115]: I0120 09:09:19.547583 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:19 crc kubenswrapper[5115]: I0120 09:09:19.547598 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:19 crc kubenswrapper[5115]: I0120 09:09:19.547621 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:19 crc kubenswrapper[5115]: I0120 09:09:19.547635 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:19Z","lastTransitionTime":"2026-01-20T09:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:19 crc kubenswrapper[5115]: E0120 09:09:19.556548 5115 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f3c68733-f696-46f4-a86e-b22c133b82e3\\\",\\\"systemUUID\\\":\\\"4e7ead0d-ccd6-45dd-b671-f46e59803438\\\"},\\\"runtimeHa
ndlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:19 crc kubenswrapper[5115]: E0120 09:09:19.556710 5115 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Jan 20 09:09:19 crc kubenswrapper[5115]: E0120 09:09:19.556748 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:19 crc kubenswrapper[5115]: E0120 09:09:19.656943 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:19 crc kubenswrapper[5115]: E0120 09:09:19.758131 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:19 crc kubenswrapper[5115]: E0120 09:09:19.858331 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:19 crc kubenswrapper[5115]: E0120 09:09:19.959066 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:20 crc kubenswrapper[5115]: E0120 09:09:20.059382 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:20 crc kubenswrapper[5115]: E0120 09:09:20.160378 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:20 crc kubenswrapper[5115]: E0120 
09:09:20.261507 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:20 crc kubenswrapper[5115]: E0120 09:09:20.269963 5115 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 20 09:09:20 crc kubenswrapper[5115]: E0120 09:09:20.361773 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:20 crc kubenswrapper[5115]: E0120 09:09:20.462931 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:20 crc kubenswrapper[5115]: E0120 09:09:20.564166 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:20 crc kubenswrapper[5115]: E0120 09:09:20.664500 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:20 crc kubenswrapper[5115]: E0120 09:09:20.764700 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:20 crc kubenswrapper[5115]: E0120 09:09:20.866079 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:20 crc kubenswrapper[5115]: E0120 09:09:20.967347 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:21 crc kubenswrapper[5115]: E0120 09:09:21.068602 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:21 crc kubenswrapper[5115]: E0120 09:09:21.168949 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:21 crc kubenswrapper[5115]: I0120 09:09:21.216987 5115 kubelet_node_status.go:413] "Setting node annotation to enable volume controller 
attach/detach" Jan 20 09:09:21 crc kubenswrapper[5115]: I0120 09:09:21.217853 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:21 crc kubenswrapper[5115]: I0120 09:09:21.217882 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:21 crc kubenswrapper[5115]: I0120 09:09:21.217914 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:21 crc kubenswrapper[5115]: E0120 09:09:21.218371 5115 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 20 09:09:21 crc kubenswrapper[5115]: I0120 09:09:21.218627 5115 scope.go:117] "RemoveContainer" containerID="df32e00f083482ec09df9e5a364f853a077b7da4bc1f27c5f26092bc413089cb" Jan 20 09:09:21 crc kubenswrapper[5115]: E0120 09:09:21.269667 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:21 crc kubenswrapper[5115]: E0120 09:09:21.370077 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:21 crc kubenswrapper[5115]: E0120 09:09:21.470993 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:21 crc kubenswrapper[5115]: I0120 09:09:21.549509 5115 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Jan 20 09:09:21 crc kubenswrapper[5115]: I0120 09:09:21.551675 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"b33a6c20a19b1f41a4ad2db77e01a63543fbc61ea2e64cd19c7c6530bae76c3b"} Jan 20 
09:09:21 crc kubenswrapper[5115]: I0120 09:09:21.551906 5115 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 20 09:09:21 crc kubenswrapper[5115]: I0120 09:09:21.552594 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:21 crc kubenswrapper[5115]: I0120 09:09:21.552634 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:21 crc kubenswrapper[5115]: I0120 09:09:21.552647 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:21 crc kubenswrapper[5115]: E0120 09:09:21.553087 5115 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 20 09:09:21 crc kubenswrapper[5115]: E0120 09:09:21.571378 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:21 crc kubenswrapper[5115]: E0120 09:09:21.672318 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:21 crc kubenswrapper[5115]: E0120 09:09:21.772755 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:21 crc kubenswrapper[5115]: E0120 09:09:21.873831 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:21 crc kubenswrapper[5115]: E0120 09:09:21.974695 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:22 crc kubenswrapper[5115]: E0120 09:09:22.075204 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:22 crc kubenswrapper[5115]: E0120 09:09:22.175378 5115 kubelet_node_status.go:515] 
"Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:22 crc kubenswrapper[5115]: E0120 09:09:22.275786 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:22 crc kubenswrapper[5115]: E0120 09:09:22.375981 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:22 crc kubenswrapper[5115]: E0120 09:09:22.476111 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:22 crc kubenswrapper[5115]: I0120 09:09:22.556283 5115 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Jan 20 09:09:22 crc kubenswrapper[5115]: I0120 09:09:22.556761 5115 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Jan 20 09:09:22 crc kubenswrapper[5115]: I0120 09:09:22.558670 5115 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="b33a6c20a19b1f41a4ad2db77e01a63543fbc61ea2e64cd19c7c6530bae76c3b" exitCode=255 Jan 20 09:09:22 crc kubenswrapper[5115]: I0120 09:09:22.558750 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"b33a6c20a19b1f41a4ad2db77e01a63543fbc61ea2e64cd19c7c6530bae76c3b"} Jan 20 09:09:22 crc kubenswrapper[5115]: I0120 09:09:22.558858 5115 scope.go:117] "RemoveContainer" containerID="df32e00f083482ec09df9e5a364f853a077b7da4bc1f27c5f26092bc413089cb" Jan 20 09:09:22 crc kubenswrapper[5115]: I0120 09:09:22.559088 5115 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 20 09:09:22 crc 
kubenswrapper[5115]: I0120 09:09:22.559728 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:22 crc kubenswrapper[5115]: I0120 09:09:22.559766 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:22 crc kubenswrapper[5115]: I0120 09:09:22.559776 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:22 crc kubenswrapper[5115]: E0120 09:09:22.560199 5115 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 20 09:09:22 crc kubenswrapper[5115]: I0120 09:09:22.560526 5115 scope.go:117] "RemoveContainer" containerID="b33a6c20a19b1f41a4ad2db77e01a63543fbc61ea2e64cd19c7c6530bae76c3b" Jan 20 09:09:22 crc kubenswrapper[5115]: E0120 09:09:22.560813 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 20 09:09:22 crc kubenswrapper[5115]: E0120 09:09:22.576773 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:22 crc kubenswrapper[5115]: E0120 09:09:22.677130 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:22 crc kubenswrapper[5115]: E0120 09:09:22.778157 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:22 crc kubenswrapper[5115]: E0120 09:09:22.878533 5115 kubelet_node_status.go:515] "Error getting the current node from lister" 
err="node \"crc\" not found" Jan 20 09:09:22 crc kubenswrapper[5115]: E0120 09:09:22.979492 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:23 crc kubenswrapper[5115]: E0120 09:09:23.080726 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:23 crc kubenswrapper[5115]: E0120 09:09:23.181578 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:23 crc kubenswrapper[5115]: E0120 09:09:23.281811 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:23 crc kubenswrapper[5115]: E0120 09:09:23.382730 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:23 crc kubenswrapper[5115]: E0120 09:09:23.482996 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:23 crc kubenswrapper[5115]: I0120 09:09:23.563838 5115 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Jan 20 09:09:23 crc kubenswrapper[5115]: E0120 09:09:23.583458 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:23 crc kubenswrapper[5115]: E0120 09:09:23.683799 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:23 crc kubenswrapper[5115]: E0120 09:09:23.785233 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:23 crc kubenswrapper[5115]: E0120 09:09:23.885783 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:23 crc 
kubenswrapper[5115]: E0120 09:09:23.986605 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:24 crc kubenswrapper[5115]: E0120 09:09:24.087383 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:24 crc kubenswrapper[5115]: E0120 09:09:24.188382 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:24 crc kubenswrapper[5115]: E0120 09:09:24.289198 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:24 crc kubenswrapper[5115]: E0120 09:09:24.389880 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:24 crc kubenswrapper[5115]: E0120 09:09:24.491041 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:24 crc kubenswrapper[5115]: E0120 09:09:24.592099 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:24 crc kubenswrapper[5115]: E0120 09:09:24.693094 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:24 crc kubenswrapper[5115]: E0120 09:09:24.794350 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:24 crc kubenswrapper[5115]: E0120 09:09:24.895216 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:24 crc kubenswrapper[5115]: E0120 09:09:24.995749 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:25 crc kubenswrapper[5115]: E0120 09:09:25.096466 5115 kubelet_node_status.go:515] "Error getting the current node from lister" 
err="node \"crc\" not found" Jan 20 09:09:25 crc kubenswrapper[5115]: E0120 09:09:25.196840 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:25 crc kubenswrapper[5115]: E0120 09:09:25.297637 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:25 crc kubenswrapper[5115]: E0120 09:09:25.398193 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:25 crc kubenswrapper[5115]: E0120 09:09:25.499009 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:25 crc kubenswrapper[5115]: E0120 09:09:25.600069 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:25 crc kubenswrapper[5115]: E0120 09:09:25.700487 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:25 crc kubenswrapper[5115]: E0120 09:09:25.801688 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:25 crc kubenswrapper[5115]: E0120 09:09:25.901839 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:25 crc kubenswrapper[5115]: I0120 09:09:25.975591 5115 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:09:25 crc kubenswrapper[5115]: I0120 09:09:25.976003 5115 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 20 09:09:25 crc kubenswrapper[5115]: I0120 09:09:25.977240 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:25 crc kubenswrapper[5115]: I0120 09:09:25.977305 5115 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:25 crc kubenswrapper[5115]: I0120 09:09:25.977327 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:25 crc kubenswrapper[5115]: E0120 09:09:25.978065 5115 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 20 09:09:25 crc kubenswrapper[5115]: I0120 09:09:25.978524 5115 scope.go:117] "RemoveContainer" containerID="b33a6c20a19b1f41a4ad2db77e01a63543fbc61ea2e64cd19c7c6530bae76c3b" Jan 20 09:09:25 crc kubenswrapper[5115]: E0120 09:09:25.978870 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 20 09:09:26 crc kubenswrapper[5115]: E0120 09:09:26.002647 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:26 crc kubenswrapper[5115]: E0120 09:09:26.103714 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:26 crc kubenswrapper[5115]: E0120 09:09:26.204658 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:26 crc kubenswrapper[5115]: E0120 09:09:26.305758 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:26 crc kubenswrapper[5115]: E0120 09:09:26.406713 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:26 crc 
kubenswrapper[5115]: E0120 09:09:26.507290 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:26 crc kubenswrapper[5115]: E0120 09:09:26.608125 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:26 crc kubenswrapper[5115]: E0120 09:09:26.708631 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:26 crc kubenswrapper[5115]: E0120 09:09:26.808769 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:26 crc kubenswrapper[5115]: E0120 09:09:26.908920 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:27 crc kubenswrapper[5115]: E0120 09:09:27.009678 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:27 crc kubenswrapper[5115]: E0120 09:09:27.110530 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:27 crc kubenswrapper[5115]: E0120 09:09:27.211732 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:27 crc kubenswrapper[5115]: E0120 09:09:27.312791 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:27 crc kubenswrapper[5115]: E0120 09:09:27.413663 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:27 crc kubenswrapper[5115]: E0120 09:09:27.514107 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:27 crc kubenswrapper[5115]: E0120 09:09:27.614737 5115 kubelet_node_status.go:515] "Error getting the current node from lister" 
err="node \"crc\" not found" Jan 20 09:09:27 crc kubenswrapper[5115]: E0120 09:09:27.715058 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:27 crc kubenswrapper[5115]: E0120 09:09:27.815235 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:27 crc kubenswrapper[5115]: E0120 09:09:27.915450 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:28 crc kubenswrapper[5115]: E0120 09:09:28.016844 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:28 crc kubenswrapper[5115]: E0120 09:09:28.118004 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:28 crc kubenswrapper[5115]: E0120 09:09:28.218858 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:28 crc kubenswrapper[5115]: E0120 09:09:28.319642 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:28 crc kubenswrapper[5115]: E0120 09:09:28.420669 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:28 crc kubenswrapper[5115]: E0120 09:09:28.521110 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:28 crc kubenswrapper[5115]: E0120 09:09:28.621626 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:28 crc kubenswrapper[5115]: E0120 09:09:28.722046 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:28 crc kubenswrapper[5115]: E0120 09:09:28.823280 5115 kubelet_node_status.go:515] 
"Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:28 crc kubenswrapper[5115]: E0120 09:09:28.924120 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:29 crc kubenswrapper[5115]: E0120 09:09:29.024723 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:29 crc kubenswrapper[5115]: E0120 09:09:29.125032 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:29 crc kubenswrapper[5115]: E0120 09:09:29.225871 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:29 crc kubenswrapper[5115]: E0120 09:09:29.326501 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:29 crc kubenswrapper[5115]: E0120 09:09:29.426918 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:29 crc kubenswrapper[5115]: E0120 09:09:29.527667 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:29 crc kubenswrapper[5115]: E0120 09:09:29.628846 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:29 crc kubenswrapper[5115]: E0120 09:09:29.729138 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:29 crc kubenswrapper[5115]: E0120 09:09:29.829998 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:29 crc kubenswrapper[5115]: E0120 09:09:29.888764 5115 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Jan 20 09:09:29 crc 
kubenswrapper[5115]: I0120 09:09:29.895526 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:29 crc kubenswrapper[5115]: I0120 09:09:29.895567 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:29 crc kubenswrapper[5115]: I0120 09:09:29.895576 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:29 crc kubenswrapper[5115]: I0120 09:09:29.895612 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:29 crc kubenswrapper[5115]: I0120 09:09:29.895628 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:29Z","lastTransitionTime":"2026-01-20T09:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:29 crc kubenswrapper[5115]: E0120 09:09:29.911953 5115 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f3c68733-f696-46f4-a86e-b22c133b82e3\\\",\\\"systemUUID\\\":\\\"4e7ead0d-ccd6-45dd-b671-f46e59803438\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:29 crc kubenswrapper[5115]: I0120 09:09:29.926547 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:29 crc kubenswrapper[5115]: I0120 09:09:29.926630 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:29 crc kubenswrapper[5115]: I0120 09:09:29.926651 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:29 crc kubenswrapper[5115]: I0120 09:09:29.926679 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:29 crc kubenswrapper[5115]: I0120 09:09:29.926699 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:29Z","lastTransitionTime":"2026-01-20T09:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:29 crc kubenswrapper[5115]: E0120 09:09:29.943766 5115 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f3c68733-f696-46f4-a86e-b22c133b82e3\\\",\\\"systemUUID\\\":\\\"4e7ead0d-ccd6-45dd-b671-f46e59803438\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:29 crc kubenswrapper[5115]: I0120 09:09:29.956244 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:29 crc kubenswrapper[5115]: I0120 09:09:29.956399 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:29 crc kubenswrapper[5115]: I0120 09:09:29.956430 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:29 crc kubenswrapper[5115]: I0120 09:09:29.956466 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:29 crc kubenswrapper[5115]: I0120 09:09:29.956491 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:29Z","lastTransitionTime":"2026-01-20T09:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:29 crc kubenswrapper[5115]: E0120 09:09:29.971028 5115 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f3c68733-f696-46f4-a86e-b22c133b82e3\\\",\\\"systemUUID\\\":\\\"4e7ead0d-ccd6-45dd-b671-f46e59803438\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:29 crc kubenswrapper[5115]: I0120 09:09:29.983952 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:29 crc kubenswrapper[5115]: I0120 09:09:29.984047 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:29 crc kubenswrapper[5115]: I0120 09:09:29.984063 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:29 crc kubenswrapper[5115]: I0120 09:09:29.984096 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:29 crc kubenswrapper[5115]: I0120 09:09:29.984113 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:29Z","lastTransitionTime":"2026-01-20T09:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:30 crc kubenswrapper[5115]: E0120 09:09:30.002271 5115 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f3c68733-f696-46f4-a86e-b22c133b82e3\\\",\\\"systemUUID\\\":\\\"4e7ead0d-ccd6-45dd-b671-f46e59803438\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:30 crc kubenswrapper[5115]: E0120 09:09:30.002459 5115 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Jan 20 09:09:30 crc kubenswrapper[5115]: E0120 09:09:30.002499 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:30 crc kubenswrapper[5115]: E0120 09:09:30.103565 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:30 crc kubenswrapper[5115]: E0120 09:09:30.203769 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:30 crc kubenswrapper[5115]: E0120 09:09:30.271210 5115 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 20 09:09:30 crc kubenswrapper[5115]: E0120 09:09:30.304490 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:30 crc kubenswrapper[5115]: E0120 09:09:30.405560 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:30 crc kubenswrapper[5115]: E0120 09:09:30.505975 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:30 crc kubenswrapper[5115]: E0120 09:09:30.607176 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:30 crc kubenswrapper[5115]: E0120 09:09:30.707556 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:30 crc kubenswrapper[5115]: 
E0120 09:09:30.808142 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:30 crc kubenswrapper[5115]: E0120 09:09:30.908511 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:31 crc kubenswrapper[5115]: E0120 09:09:31.009117 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:31 crc kubenswrapper[5115]: E0120 09:09:31.110091 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:31 crc kubenswrapper[5115]: E0120 09:09:31.210316 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:31 crc kubenswrapper[5115]: E0120 09:09:31.311179 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:31 crc kubenswrapper[5115]: E0120 09:09:31.411705 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:31 crc kubenswrapper[5115]: E0120 09:09:31.511876 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:31 crc kubenswrapper[5115]: I0120 09:09:31.552603 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:09:31 crc kubenswrapper[5115]: I0120 09:09:31.553103 5115 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 20 09:09:31 crc kubenswrapper[5115]: I0120 09:09:31.554516 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:31 crc kubenswrapper[5115]: I0120 09:09:31.554610 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 09:09:31 crc kubenswrapper[5115]: I0120 09:09:31.554633 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:31 crc kubenswrapper[5115]: E0120 09:09:31.555525 5115 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 20 09:09:31 crc kubenswrapper[5115]: I0120 09:09:31.556004 5115 scope.go:117] "RemoveContainer" containerID="b33a6c20a19b1f41a4ad2db77e01a63543fbc61ea2e64cd19c7c6530bae76c3b" Jan 20 09:09:31 crc kubenswrapper[5115]: E0120 09:09:31.556429 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 20 09:09:31 crc kubenswrapper[5115]: E0120 09:09:31.612827 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:31 crc kubenswrapper[5115]: E0120 09:09:31.713769 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:31 crc kubenswrapper[5115]: E0120 09:09:31.814771 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:31 crc kubenswrapper[5115]: E0120 09:09:31.915957 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:32 crc kubenswrapper[5115]: E0120 09:09:32.016561 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:32 crc kubenswrapper[5115]: E0120 09:09:32.117417 5115 kubelet_node_status.go:515] 
"Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:32 crc kubenswrapper[5115]: E0120 09:09:32.217946 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:32 crc kubenswrapper[5115]: E0120 09:09:32.318590 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:32 crc kubenswrapper[5115]: E0120 09:09:32.418816 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:32 crc kubenswrapper[5115]: E0120 09:09:32.519811 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:32 crc kubenswrapper[5115]: E0120 09:09:32.620596 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:32 crc kubenswrapper[5115]: E0120 09:09:32.720832 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:32 crc kubenswrapper[5115]: E0120 09:09:32.821459 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:32 crc kubenswrapper[5115]: E0120 09:09:32.922000 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:33 crc kubenswrapper[5115]: E0120 09:09:33.023000 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:33 crc kubenswrapper[5115]: E0120 09:09:33.123382 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:33 crc kubenswrapper[5115]: E0120 09:09:33.224032 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:33 crc kubenswrapper[5115]: E0120 
09:09:33.325204 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:33 crc kubenswrapper[5115]: E0120 09:09:33.425798 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:33 crc kubenswrapper[5115]: E0120 09:09:33.526533 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:33 crc kubenswrapper[5115]: E0120 09:09:33.626975 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:33 crc kubenswrapper[5115]: E0120 09:09:33.728031 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:33 crc kubenswrapper[5115]: E0120 09:09:33.828728 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:33 crc kubenswrapper[5115]: E0120 09:09:33.928958 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:34 crc kubenswrapper[5115]: E0120 09:09:34.029648 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:34 crc kubenswrapper[5115]: E0120 09:09:34.130807 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:34 crc kubenswrapper[5115]: E0120 09:09:34.230980 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:34 crc kubenswrapper[5115]: E0120 09:09:34.331586 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:34 crc kubenswrapper[5115]: E0120 09:09:34.431797 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 
09:09:34 crc kubenswrapper[5115]: E0120 09:09:34.532392 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:34 crc kubenswrapper[5115]: E0120 09:09:34.633157 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:34 crc kubenswrapper[5115]: E0120 09:09:34.734201 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:34 crc kubenswrapper[5115]: E0120 09:09:34.834474 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:34 crc kubenswrapper[5115]: E0120 09:09:34.934676 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:35 crc kubenswrapper[5115]: E0120 09:09:35.035490 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:35 crc kubenswrapper[5115]: E0120 09:09:35.136398 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:35 crc kubenswrapper[5115]: E0120 09:09:35.237215 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:35 crc kubenswrapper[5115]: E0120 09:09:35.337865 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:35 crc kubenswrapper[5115]: E0120 09:09:35.438017 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:35 crc kubenswrapper[5115]: E0120 09:09:35.538655 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:35 crc kubenswrapper[5115]: E0120 09:09:35.639116 5115 kubelet_node_status.go:515] "Error getting the current node from 
lister" err="node \"crc\" not found" Jan 20 09:09:35 crc kubenswrapper[5115]: E0120 09:09:35.740276 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:35 crc kubenswrapper[5115]: E0120 09:09:35.841309 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:35 crc kubenswrapper[5115]: E0120 09:09:35.941866 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:36 crc kubenswrapper[5115]: E0120 09:09:36.042374 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:36 crc kubenswrapper[5115]: E0120 09:09:36.143564 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:36 crc kubenswrapper[5115]: E0120 09:09:36.243774 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:36 crc kubenswrapper[5115]: E0120 09:09:36.344319 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:36 crc kubenswrapper[5115]: E0120 09:09:36.444561 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:36 crc kubenswrapper[5115]: E0120 09:09:36.545328 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:36 crc kubenswrapper[5115]: E0120 09:09:36.646379 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:36 crc kubenswrapper[5115]: E0120 09:09:36.746876 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:36 crc kubenswrapper[5115]: E0120 09:09:36.847703 5115 
kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:36 crc kubenswrapper[5115]: E0120 09:09:36.948279 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:37 crc kubenswrapper[5115]: E0120 09:09:37.048756 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:37 crc kubenswrapper[5115]: E0120 09:09:37.149963 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:37 crc kubenswrapper[5115]: E0120 09:09:37.250674 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:37 crc kubenswrapper[5115]: E0120 09:09:37.351256 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:37 crc kubenswrapper[5115]: E0120 09:09:37.452258 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:37 crc kubenswrapper[5115]: E0120 09:09:37.552879 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:37 crc kubenswrapper[5115]: I0120 09:09:37.640639 5115 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160" Jan 20 09:09:37 crc kubenswrapper[5115]: I0120 09:09:37.656414 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:37 crc kubenswrapper[5115]: I0120 09:09:37.656477 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:37 crc kubenswrapper[5115]: I0120 09:09:37.656490 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:37 
crc kubenswrapper[5115]: I0120 09:09:37.656511 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:37 crc kubenswrapper[5115]: I0120 09:09:37.656527 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:37Z","lastTransitionTime":"2026-01-20T09:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:09:37 crc kubenswrapper[5115]: I0120 09:09:37.693564 5115 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 09:09:37 crc kubenswrapper[5115]: I0120 09:09:37.709605 5115 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 20 09:09:37 crc kubenswrapper[5115]: I0120 09:09:37.759093 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:37 crc kubenswrapper[5115]: I0120 09:09:37.759147 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:37 crc kubenswrapper[5115]: I0120 09:09:37.759160 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:37 crc kubenswrapper[5115]: I0120 09:09:37.759178 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:37 crc kubenswrapper[5115]: I0120 09:09:37.759192 5115 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:37Z","lastTransitionTime":"2026-01-20T09:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:09:37 crc kubenswrapper[5115]: I0120 09:09:37.809070 5115 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 20 09:09:37 crc kubenswrapper[5115]: I0120 09:09:37.862303 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:37 crc kubenswrapper[5115]: I0120 09:09:37.862413 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:37 crc kubenswrapper[5115]: I0120 09:09:37.862437 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:37 crc kubenswrapper[5115]: I0120 09:09:37.862462 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:37 crc kubenswrapper[5115]: I0120 09:09:37.862480 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:37Z","lastTransitionTime":"2026-01-20T09:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:37 crc kubenswrapper[5115]: I0120 09:09:37.911535 5115 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-etcd/etcd-crc" Jan 20 09:09:37 crc kubenswrapper[5115]: I0120 09:09:37.966307 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:37 crc kubenswrapper[5115]: I0120 09:09:37.966374 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:37 crc kubenswrapper[5115]: I0120 09:09:37.966392 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:37 crc kubenswrapper[5115]: I0120 09:09:37.966412 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:37 crc kubenswrapper[5115]: I0120 09:09:37.966427 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:37Z","lastTransitionTime":"2026-01-20T09:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.013604 5115 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.069612 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.069696 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.069717 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.069743 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.069762 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:38Z","lastTransitionTime":"2026-01-20T09:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.121302 5115 apiserver.go:52] "Watching apiserver" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.130252 5115 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.131914 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-target-fhkjl","openshift-network-operator/iptables-alerter-5jnd7","openshift-ovn-kubernetes/ovnkube-node-pnd9p","openshift-image-registry/node-ca-5tt8v","openshift-kube-apiserver/kube-apiserver-crc","openshift-multus/multus-xjql7","openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5","openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv","openshift-etcd/etcd-crc","openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/machine-config-daemon-zvfcd","openshift-multus/multus-additional-cni-plugins-bmvv2","openshift-multus/network-metrics-daemon-tzrjx","openshift-network-node-identity/network-node-identity-dgvkt","openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-sfqm7","openshift-dns/node-resolver-bht7q","openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.133502 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.134250 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.134325 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.134242 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.136621 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\"" Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.136999 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.139293 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.143936 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.145642 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.146924 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.148240 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.149688 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.149839 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.149875 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-zvfcd" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.157822 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.157847 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.158494 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.158789 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.159010 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.159506 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.159650 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.159677 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.159517 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 
09:09:38.160492 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.161204 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-xjql7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.161564 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-5tt8v" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.164039 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.164432 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.164911 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-bht7q" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.168457 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.169136 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.171473 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.171754 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.171820 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.172104 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.173065 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.173108 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.173122 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.173109 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.173142 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.173157 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:38Z","lastTransitionTime":"2026-01-20T09:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.173132 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.173387 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.173636 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.173784 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.174521 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-bmvv2" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.174524 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tzrjx" Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.174678 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-tzrjx" podUID="3d8f5093-1a2e-4c32-8c74-b6cfb185cc99" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.176003 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.176835 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.177195 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.179379 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.181015 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.181105 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.181370 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.181498 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.182002 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.184159 5115 reflector.go:430] "Caches populated" 
type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.184343 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.184442 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-sfqm7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.184952 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.185956 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.186810 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.188002 5115 scope.go:117] "RemoveContainer" containerID="b33a6c20a19b1f41a4ad2db77e01a63543fbc61ea2e64cd19c7c6530bae76c3b" Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.188306 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.194976 5115 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0b51ef97-33e0-4889-bd54-ac4be09c39e7-host-kubelet\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.195012 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0b51ef97-33e0-4889-bd54-ac4be09c39e7-host-run-netns\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.195036 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5976ec5f-b09c-4f83-802d-6042842fd8e6-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-sfqm7\" (UID: \"5976ec5f-b09c-4f83-802d-6042842fd8e6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-sfqm7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.195063 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tt9ld\" (UniqueName: \"kubernetes.io/projected/5976ec5f-b09c-4f83-802d-6042842fd8e6-kube-api-access-tt9ld\") pod \"ovnkube-control-plane-57b78d8988-sfqm7\" (UID: \"5976ec5f-b09c-4f83-802d-6042842fd8e6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-sfqm7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.195087 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0b51ef97-33e0-4889-bd54-ac4be09c39e7-systemd-units\") pod \"ovnkube-node-pnd9p\" (UID: 
\"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.195106 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0b51ef97-33e0-4889-bd54-ac4be09c39e7-run-ovn\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.195123 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0b51ef97-33e0-4889-bd54-ac4be09c39e7-host-cni-netd\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.195152 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.195210 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.195251 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/0b51ef97-33e0-4889-bd54-ac4be09c39e7-var-lib-openvswitch\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.195280 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0b51ef97-33e0-4889-bd54-ac4be09c39e7-host-run-ovn-kubernetes\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.195316 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0b51ef97-33e0-4889-bd54-ac4be09c39e7-ovnkube-script-lib\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.195345 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/92f344d4-34bc-4412-83c9-6b7beb45db64-serviceca\") pod \"node-ca-5tt8v\" (UID: \"92f344d4-34bc-4412-83c9-6b7beb45db64\") " pod="openshift-image-registry/node-ca-5tt8v" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.195480 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9kn4\" (UniqueName: \"kubernetes.io/projected/0b51ef97-33e0-4889-bd54-ac4be09c39e7-kube-api-access-f9kn4\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.195504 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.195541 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0b51ef97-33e0-4889-bd54-ac4be09c39e7-ovnkube-config\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.195568 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.195590 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0b51ef97-33e0-4889-bd54-ac4be09c39e7-etc-openvswitch\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.195611 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0b51ef97-33e0-4889-bd54-ac4be09c39e7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.195644 5115 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2p9bt\" (UniqueName: \"kubernetes.io/projected/650d165f-75fb-4a16-a8fa-d8366b5f6eea-kube-api-access-2p9bt\") pod \"node-resolver-bht7q\" (UID: \"650d165f-75fb-4a16-a8fa-d8366b5f6eea\") " pod="openshift-dns/node-resolver-bht7q" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.195681 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.195716 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0b51ef97-33e0-4889-bd54-ac4be09c39e7-run-systemd\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.195970 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.196373 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3d8f5093-1a2e-4c32-8c74-b6cfb185cc99-metrics-certs\") pod \"network-metrics-daemon-tzrjx\" (UID: \"3d8f5093-1a2e-4c32-8c74-b6cfb185cc99\") " 
pod="openshift-multus/network-metrics-daemon-tzrjx" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.197057 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.196424 5115 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.197164 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5976ec5f-b09c-4f83-802d-6042842fd8e6-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-sfqm7\" (UID: \"5976ec5f-b09c-4f83-802d-6042842fd8e6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-sfqm7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.197310 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.197363 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5976ec5f-b09c-4f83-802d-6042842fd8e6-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-sfqm7\" (UID: \"5976ec5f-b09c-4f83-802d-6042842fd8e6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-sfqm7" Jan 20 09:09:38 crc 
kubenswrapper[5115]: I0120 09:09:38.197512 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.197584 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.197631 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0b51ef97-33e0-4889-bd54-ac4be09c39e7-node-log\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.197690 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.197747 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0b51ef97-33e0-4889-bd54-ac4be09c39e7-run-openvswitch\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.197797 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.197843 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0b51ef97-33e0-4889-bd54-ac4be09c39e7-log-socket\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.197933 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/92f344d4-34bc-4412-83c9-6b7beb45db64-host\") pod \"node-ca-5tt8v\" (UID: \"92f344d4-34bc-4412-83c9-6b7beb45db64\") " pod="openshift-image-registry/node-ca-5tt8v" Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.198106 5115 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.199566 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/650d165f-75fb-4a16-a8fa-d8366b5f6eea-hosts-file\") pod \"node-resolver-bht7q\" (UID: \"650d165f-75fb-4a16-a8fa-d8366b5f6eea\") " pod="openshift-dns/node-resolver-bht7q" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.198826 5115 swap_util.go:74] "error creating dir to test if tmpfs noswap is 
enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.199631 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-20 09:09:38.699603288 +0000 UTC m=+88.868381818 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.199683 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/650d165f-75fb-4a16-a8fa-d8366b5f6eea-tmp-dir\") pod \"node-resolver-bht7q\" (UID: \"650d165f-75fb-4a16-a8fa-d8366b5f6eea\") " pod="openshift-dns/node-resolver-bht7q" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.199709 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0b51ef97-33e0-4889-bd54-ac4be09c39e7-host-cni-bin\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.199728 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwps7\" (UniqueName: \"kubernetes.io/projected/92f344d4-34bc-4412-83c9-6b7beb45db64-kube-api-access-rwps7\") pod \"node-ca-5tt8v\" (UID: 
\"92f344d4-34bc-4412-83c9-6b7beb45db64\") " pod="openshift-image-registry/node-ca-5tt8v" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.199778 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.199799 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtcxt\" (UniqueName: \"kubernetes.io/projected/3d8f5093-1a2e-4c32-8c74-b6cfb185cc99-kube-api-access-wtcxt\") pod \"network-metrics-daemon-tzrjx\" (UID: \"3d8f5093-1a2e-4c32-8c74-b6cfb185cc99\") " pod="openshift-multus/network-metrics-daemon-tzrjx" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.198776 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.199939 5115 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.200119 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 20 
09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.200149 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0b51ef97-33e0-4889-bd54-ac4be09c39e7-host-slash\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.200254 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-20 09:09:38.700201594 +0000 UTC m=+88.868980164 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.200323 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0b51ef97-33e0-4889-bd54-ac4be09c39e7-env-overrides\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.200388 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0b51ef97-33e0-4889-bd54-ac4be09c39e7-ovn-node-metrics-cert\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.200466 5115 status_manager.go:919] "Failed to 
update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.201698 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.214411 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.214503 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsgwk\" (UniqueName: 
\"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.220045 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.220592 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-xjql7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f41177fd-db48-43c1-9a8d-69cad41d3fab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zmmw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xjql7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.222331 5115 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.222372 5115 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.222390 5115 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.222516 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-20 09:09:38.722487231 +0000 UTC m=+88.891265781 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.231108 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.232492 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.232798 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5tt8v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92f344d4-34bc-4412-83c9-6b7beb45db64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwps7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5tt8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.242385 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.254632 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.264125 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zvfcd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc89765b-3b00-4f86-ae67-a5088c182918\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7g8mg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7g8mg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zvfcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.271515 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-bht7q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"650d165f-75fb-4a16-a8fa-d8366b5f6eea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2p9bt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bht7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.275567 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.275618 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.275634 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.275657 5115 kubelet_node_status.go:736] "Recording 
event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.275670 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:38Z","lastTransitionTime":"2026-01-20T09:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.281145 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 
127.0.0.1:9743: connect: connection refused" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.291374 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.299882 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-sfqm7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5976ec5f-b09c-4f83-802d-6042842fd8e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt9ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt9ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-sfqm7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.301149 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.301204 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.301239 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.301272 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.301324 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume 
started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.301351 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.301379 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.301407 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.301432 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.301475 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Jan 20 09:09:38 crc 
kubenswrapper[5115]: I0120 09:09:38.301504 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.301528 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.301558 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.301579 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") pod \"0effdbcf-dd7d-404d-9d48-77536d665a5d\" (UID: \"0effdbcf-dd7d-404d-9d48-77536d665a5d\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.301603 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.301627 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pskd\" (UniqueName: 
\"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.301649 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.301673 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgrkj\" (UniqueName: \"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.301698 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.301720 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.301743 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 
09:09:38.301770 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.301795 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqbfk\" (UniqueName: \"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.301817 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.301841 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.301864 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.301917 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l87hs\" (UniqueName: 
\"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.301954 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.301978 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.302002 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.302029 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.302060 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Jan 20 09:09:38 crc 
kubenswrapper[5115]: I0120 09:09:38.302085 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.302109 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.302133 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.302167 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.302199 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.302231 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.302267 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.302294 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.302325 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.302352 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.302375 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 20 09:09:38 crc 
kubenswrapper[5115]: I0120 09:09:38.302398 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.302427 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.302463 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.302495 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.302528 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.302555 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") 
pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.302579 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.302604 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.302628 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.302651 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.302675 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.302701 5115 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.302725 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.302756 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.302789 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.302818 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.302846 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: 
\"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.302881 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.302929 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.302956 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.302981 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.303007 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.303039 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.303066 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.303070 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj" (OuterVolumeSpecName: "kube-api-access-mfzkj") pod "0effdbcf-dd7d-404d-9d48-77536d665a5d" (UID: "0effdbcf-dd7d-404d-9d48-77536d665a5d"). InnerVolumeSpecName "kube-api-access-mfzkj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.303094 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.303130 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfp5s\" (UniqueName: \"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.303162 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.303220 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26xrl\" (UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.303270 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.303305 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.303380 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbc2l\" (UniqueName: \"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.303418 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: 
\"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.303454 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hm9x7\" (UniqueName: \"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.303489 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.303520 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.303544 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tknt7\" (UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.303589 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.303626 5115 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.303673 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.303692 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp" (OuterVolumeSpecName: "tmp") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.303714 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.303757 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.303792 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.303828 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.303840 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.303875 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.303935 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.303972 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.304013 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.304048 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.304096 5115 operation_generator.go:781] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.304252 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.304276 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.304322 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.304749 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.304800 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.304874 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf" (OuterVolumeSpecName: "kube-api-access-ptkcf") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "kube-api-access-ptkcf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.304952 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.304987 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.304359 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.305128 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.305156 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") pod \"e093be35-bb62-4843-b2e8-094545761610\" (UID: \"e093be35-bb62-4843-b2e8-094545761610\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.305215 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.305333 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.305445 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.305545 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.305631 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr" (OuterVolumeSpecName: "kube-api-access-z5rsr") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "kube-api-access-z5rsr". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.306025 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp" (OuterVolumeSpecName: "tmp") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.306130 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.306176 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.306215 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.306253 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.306290 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit\" 
(UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.306327 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.306363 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.306406 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.306446 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.306692 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.306744 5115 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.306772 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.306797 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.306827 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.306853 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.306877 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.306932 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.306974 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.307015 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.307031 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.307126 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.307171 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.307213 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.307248 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.307285 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.307325 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.307370 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.307407 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.307443 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.307477 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.307514 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 
09:09:38.307552 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.307593 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.307630 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.307675 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.307712 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.307754 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.307791 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.307830 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.308531 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.308628 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") pod \"af41de71-79cf-4590-bbe9-9e8b848862cb\" (UID: \"af41de71-79cf-4590-bbe9-9e8b848862cb\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.308659 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Jan 20 09:09:38 crc 
kubenswrapper[5115]: I0120 09:09:38.308691 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.308735 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjwtd\" (UniqueName: \"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.308982 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.307127 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities" (OuterVolumeSpecName: "utilities") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.307142 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd" (OuterVolumeSpecName: "kube-api-access-8pskd") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "kube-api-access-8pskd". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.307182 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.307219 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.307724 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.307711 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv" (OuterVolumeSpecName: "kube-api-access-xxfcv") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "kube-api-access-xxfcv". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.307819 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.307994 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s" (OuterVolumeSpecName: "kube-api-access-xfp5s") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "kube-api-access-xfp5s". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.308631 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key" (OuterVolumeSpecName: "signing-key") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.308647 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.308671 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.309096 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config" (OuterVolumeSpecName: "config") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.308590 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf" (OuterVolumeSpecName: "kube-api-access-6dmhf") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "kube-api-access-6dmhf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.308455 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.309498 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:09:38.809119763 +0000 UTC m=+88.977898303 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.311330 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.311346 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk" (OuterVolumeSpecName: "kube-api-access-qqbfk") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "kube-api-access-qqbfk". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.311352 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7" (OuterVolumeSpecName: "kube-api-access-hm9x7") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "kube-api-access-hm9x7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.309780 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl" (OuterVolumeSpecName: "kube-api-access-26xrl") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "kube-api-access-26xrl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.309958 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.309992 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.310007 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.310008 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.311471 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh" (OuterVolumeSpecName: "kube-api-access-m5lgh") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "kube-api-access-m5lgh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.310029 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.310159 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.310322 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.310348 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.310620 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw" (OuterVolumeSpecName: "kube-api-access-5lcfw") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "kube-api-access-5lcfw". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.310659 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.310690 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config" (OuterVolumeSpecName: "console-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.310728 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.310987 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.311006 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.311378 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.311767 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs" (OuterVolumeSpecName: "kube-api-access-l87hs") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "kube-api-access-l87hs". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.311797 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.311810 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.311971 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.312024 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit" (OuterVolumeSpecName: "audit") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.312343 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities" (OuterVolumeSpecName: "utilities") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.312542 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25383c7b-b61c-48bd-b099-c7c8f90c6c1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f93bd1c4ac75f0c99554549eefe09dda170f1b0afebc9787b7fd0a0494295d1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[
65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://af4e50c289ad8aa409f7e54320df8a017c15f3b8c608266e408fe79c4155cbc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af4e50c289ad8aa409f7e54320df8a017c15f3b8c608266e408fe79c4155cbc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:08:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.313045 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.313290 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap" (OuterVolumeSpecName: "whereabouts-flatfile-configmap") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "whereabouts-flatfile-configmap". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.313502 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.313678 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.313695 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config" (OuterVolumeSpecName: "config") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.313773 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.314244 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9" (OuterVolumeSpecName: "kube-api-access-99zj9") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "kube-api-access-99zj9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.314278 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" (OuterVolumeSpecName: "ca-trust-extracted-pem") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "ca-trust-extracted-pem". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.314493 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd" (OuterVolumeSpecName: "kube-api-access-mjwtd") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "kube-api-access-mjwtd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.314843 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.315016 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.315174 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.315408 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b" (OuterVolumeSpecName: "kube-api-access-pgx6b") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "kube-api-access-pgx6b". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.315516 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj" (OuterVolumeSpecName: "kube-api-access-qgrkj") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "kube-api-access-qgrkj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.315664 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.316109 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" (OuterVolumeSpecName: "kube-api-access-hckvg") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "kube-api-access-hckvg". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.316220 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv" (OuterVolumeSpecName: "kube-api-access-dztfv") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "kube-api-access-dztfv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.316302 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.309698 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config" (OuterVolumeSpecName: "config") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.316606 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk" (OuterVolumeSpecName: "kube-api-access-w94wk") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "kube-api-access-w94wk". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.316762 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities" (OuterVolumeSpecName: "utilities") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.316792 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config" (OuterVolumeSpecName: "config") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.317118 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf" (OuterVolumeSpecName: "kube-api-access-q4smf") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "kube-api-access-q4smf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.317248 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "mcd-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.317369 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca" (OuterVolumeSpecName: "service-ca") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.317466 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l" (OuterVolumeSpecName: "kube-api-access-sbc2l") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "kube-api-access-sbc2l". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.317515 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.317569 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.317597 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: 
\"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.317627 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.317652 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.317733 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.317761 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.317788 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.317816 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-4g8ts\" (UniqueName: \"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.317843 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.317870 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.317915 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.317944 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.317972 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: 
\"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.317995 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.318019 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.318043 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.318068 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.318092 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.318118 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.318125 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.318142 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.318170 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" (OuterVolumeSpecName: "images") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.318183 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.318404 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.318206 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config" (OuterVolumeSpecName: "config") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.318213 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.318230 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "tmp-dir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.318325 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp" (OuterVolumeSpecName: "tmp") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.318321 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.318445 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.318586 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn" (OuterVolumeSpecName: "kube-api-access-xnxbn") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "kube-api-access-xnxbn". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.318809 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.319004 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.318564 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.319106 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp" (OuterVolumeSpecName: "tmp") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.319171 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7" (OuterVolumeSpecName: "kube-api-access-tknt7") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "kube-api-access-tknt7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.319452 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" (OuterVolumeSpecName: "kube-api-access-zg8nc") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "kube-api-access-zg8nc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.319519 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.319684 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs" (OuterVolumeSpecName: "certs") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.319996 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.320048 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.320133 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.320161 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.320210 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.320250 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nb9c\" (UniqueName: \"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.320251 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config" (OuterVolumeSpecName: "config") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.320332 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.320494 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp" (OuterVolumeSpecName: "kube-api-access-8nspp") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "kube-api-access-8nspp". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.320593 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.320686 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.320705 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.320694 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.320819 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.320880 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.320928 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zth6t\" (UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.320960 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.320996 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.321142 5115 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.321166 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.321181 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.321204 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.321213 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.321268 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.321270 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.321360 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.321401 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.321409 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config" (OuterVolumeSpecName: "config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.321452 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.321501 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.321542 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.321580 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.321619 5115 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.321658 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.321686 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx" (OuterVolumeSpecName: "kube-api-access-l9stx") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "kube-api-access-l9stx". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.321692 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.321757 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.321789 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.321814 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.321880 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsb9b\" (UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.321929 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.321953 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.321985 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.321991 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.322006 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.322028 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.322049 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.322071 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.322085 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config" (OuterVolumeSpecName: "config") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.322092 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.322240 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.322327 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config" (OuterVolumeSpecName: "config") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.322391 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.322417 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.322459 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.322579 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twvbl\" (UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.322609 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.322611 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w" (OuterVolumeSpecName: "kube-api-access-rzt4w") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "kube-api-access-rzt4w". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.322626 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.322726 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.322758 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.322823 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.322855 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.322946 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.323004 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.323032 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.323064 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.323073 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.323094 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.323127 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.323261 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.323300 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 
09:09:38.323343 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.323363 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.323382 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jjkz\" (UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.323092 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b" (OuterVolumeSpecName: "kube-api-access-zsb9b") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "kube-api-access-zsb9b". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.324707 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca" (OuterVolumeSpecName: "serviceca") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.323189 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c" (OuterVolumeSpecName: "kube-api-access-8nb9c") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "kube-api-access-8nb9c". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.323241 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.323481 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca" (OuterVolumeSpecName: "client-ca") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.323546 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz" (OuterVolumeSpecName: "kube-api-access-ws8zz") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "kube-api-access-ws8zz". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.323665 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t" (OuterVolumeSpecName: "kube-api-access-zth6t") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "kube-api-access-zth6t". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.323808 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.323889 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv" (OuterVolumeSpecName: "kube-api-access-pddnv") pod "e093be35-bb62-4843-b2e8-094545761610" (UID: "e093be35-bb62-4843-b2e8-094545761610"). InnerVolumeSpecName "kube-api-access-pddnv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.323955 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.324121 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.324127 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw" (OuterVolumeSpecName: "kube-api-access-9z4sw") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "kube-api-access-9z4sw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.324359 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.324452 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca" (OuterVolumeSpecName: "service-ca") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.324679 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr" (OuterVolumeSpecName: "kube-api-access-wj4qr") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "kube-api-access-wj4qr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.324693 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6" (OuterVolumeSpecName: "kube-api-access-pllx6") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "kube-api-access-pllx6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.324834 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.324857 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.324876 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") pod 
\"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") "
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.324690 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.325235 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert" (OuterVolumeSpecName: "cert") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.325525 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities" (OuterVolumeSpecName: "utilities") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.325527 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.327351 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") "
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.327355 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.327391 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") "
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.327440 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") "
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.327584 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config" (OuterVolumeSpecName: "config") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.327596 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0b51ef97-33e0-4889-bd54-ac4be09c39e7-host-cni-bin\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p"
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.325688 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps" (OuterVolumeSpecName: "kube-api-access-d7cps") pod "af41de71-79cf-4590-bbe9-9e8b848862cb" (UID: "af41de71-79cf-4590-bbe9-9e8b848862cb"). InnerVolumeSpecName "kube-api-access-d7cps". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.325691 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume" (OuterVolumeSpecName: "config-volume") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.327637 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rwps7\" (UniqueName: \"kubernetes.io/projected/92f344d4-34bc-4412-83c9-6b7beb45db64-kube-api-access-rwps7\") pod \"node-ca-5tt8v\" (UID: \"92f344d4-34bc-4412-83c9-6b7beb45db64\") " pod="openshift-image-registry/node-ca-5tt8v"
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.326003 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6" (OuterVolumeSpecName: "kube-api-access-ftwb6") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "kube-api-access-ftwb6". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.326166 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities" (OuterVolumeSpecName: "utilities") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.326338 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.327677 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/4b42cc5a-50db-4588-8149-e758f33704ef-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-bmvv2\" (UID: \"4b42cc5a-50db-4588-8149-e758f33704ef\") " pod="openshift-multus/multus-additional-cni-plugins-bmvv2"
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.326462 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg" (OuterVolumeSpecName: "kube-api-access-wbmqg") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "kube-api-access-wbmqg". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.326469 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9" (OuterVolumeSpecName: "kube-api-access-ddlk9") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "kube-api-access-ddlk9". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.326615 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.326740 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl" (OuterVolumeSpecName: "kube-api-access-twvbl") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "kube-api-access-twvbl". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.326813 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.327154 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp" (OuterVolumeSpecName: "tmp") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.326558 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.327206 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.327713 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f41177fd-db48-43c1-9a8d-69cad41d3fab-multus-cni-dir\") pod \"multus-xjql7\" (UID: \"f41177fd-db48-43c1-9a8d-69cad41d3fab\") " pod="openshift-multus/multus-xjql7"
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.327845 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f41177fd-db48-43c1-9a8d-69cad41d3fab-cnibin\") pod \"multus-xjql7\" (UID: \"f41177fd-db48-43c1-9a8d-69cad41d3fab\") " pod="openshift-multus/multus-xjql7"
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.327876 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq" (OuterVolumeSpecName: "kube-api-access-d4tqq") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "kube-api-access-d4tqq". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.327909 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/f41177fd-db48-43c1-9a8d-69cad41d3fab-multus-daemon-config\") pod \"multus-xjql7\" (UID: \"f41177fd-db48-43c1-9a8d-69cad41d3fab\") " pod="openshift-multus/multus-xjql7"
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.327923 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.327944 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zmmw\" (UniqueName: \"kubernetes.io/projected/f41177fd-db48-43c1-9a8d-69cad41d3fab-kube-api-access-6zmmw\") pod \"multus-xjql7\" (UID: \"f41177fd-db48-43c1-9a8d-69cad41d3fab\") " pod="openshift-multus/multus-xjql7"
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.328128 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wtcxt\" (UniqueName: \"kubernetes.io/projected/3d8f5093-1a2e-4c32-8c74-b6cfb185cc99-kube-api-access-wtcxt\") pod \"network-metrics-daemon-tzrjx\" (UID: \"3d8f5093-1a2e-4c32-8c74-b6cfb185cc99\") " pod="openshift-multus/network-metrics-daemon-tzrjx"
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.328168 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0b51ef97-33e0-4889-bd54-ac4be09c39e7-host-slash\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p"
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.328204 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0b51ef97-33e0-4889-bd54-ac4be09c39e7-env-overrides\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p"
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.328150 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c36dad2-2b5f-476d-ae16-db72a8a479e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://9cba2d9418782f2aa23b490fca45506e8a44b0f733ce30c248299532a7c06d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://cee213223198b5e3642cdac2764daeb64bf20128377548aa985feafed2a3d447\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a2d7f893e43011292fd2dc960e3f3f89c2af1830eace24fdafba43340a362e1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6c00207af01190039121d0127e5a029446b01758e672d57fe7d8c31b546a00d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:08:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.328282 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0b51ef97-33e0-4889-bd54-ac4be09c39e7-ovn-node-metrics-cert\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p"
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.328313 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7g8mg\" (UniqueName: \"kubernetes.io/projected/dc89765b-3b00-4f86-ae67-a5088c182918-kube-api-access-7g8mg\") pod \"machine-config-daemon-zvfcd\" (UID: \"dc89765b-3b00-4f86-ae67-a5088c182918\") " pod="openshift-machine-config-operator/machine-config-daemon-zvfcd"
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.328340 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/f41177fd-db48-43c1-9a8d-69cad41d3fab-host-var-lib-cni-multus\") pod \"multus-xjql7\" (UID: \"f41177fd-db48-43c1-9a8d-69cad41d3fab\") " pod="openshift-multus/multus-xjql7"
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.328373 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0b51ef97-33e0-4889-bd54-ac4be09c39e7-host-kubelet\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p"
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.328399 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0b51ef97-33e0-4889-bd54-ac4be09c39e7-host-run-netns\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p"
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.328430 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5976ec5f-b09c-4f83-802d-6042842fd8e6-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-sfqm7\" (UID: \"5976ec5f-b09c-4f83-802d-6042842fd8e6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-sfqm7"
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.328462 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f41177fd-db48-43c1-9a8d-69cad41d3fab-etc-kubernetes\") pod \"multus-xjql7\" (UID: \"f41177fd-db48-43c1-9a8d-69cad41d3fab\") " pod="openshift-multus/multus-xjql7"
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.328469 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config" (OuterVolumeSpecName: "config") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.328502 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tt9ld\" (UniqueName: \"kubernetes.io/projected/5976ec5f-b09c-4f83-802d-6042842fd8e6-kube-api-access-tt9ld\") pod \"ovnkube-control-plane-57b78d8988-sfqm7\" (UID: \"5976ec5f-b09c-4f83-802d-6042842fd8e6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-sfqm7"
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.328538 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0b51ef97-33e0-4889-bd54-ac4be09c39e7-systemd-units\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p"
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.328570 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0b51ef97-33e0-4889-bd54-ac4be09c39e7-run-ovn\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p"
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.328737 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0b51ef97-33e0-4889-bd54-ac4be09c39e7-host-cni-netd\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p"
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.329103 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/4b42cc5a-50db-4588-8149-e758f33704ef-cnibin\") pod \"multus-additional-cni-plugins-bmvv2\" (UID: \"4b42cc5a-50db-4588-8149-e758f33704ef\") " pod="openshift-multus/multus-additional-cni-plugins-bmvv2"
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.329212 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0b51ef97-33e0-4889-bd54-ac4be09c39e7-host-run-netns\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p"
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.329212 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0b51ef97-33e0-4889-bd54-ac4be09c39e7-host-slash\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p"
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.329247 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0b51ef97-33e0-4889-bd54-ac4be09c39e7-host-kubelet\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p"
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.328505 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9" (OuterVolumeSpecName: "kube-api-access-9vsz9") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "kube-api-access-9vsz9". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.328608 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m" (OuterVolumeSpecName: "kube-api-access-4hb7m") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "kube-api-access-4hb7m". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.328669 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.328690 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images" (OuterVolumeSpecName: "images") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.328748 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.329102 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config" (OuterVolumeSpecName: "config") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.329184 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.329446 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0b51ef97-33e0-4889-bd54-ac4be09c39e7-var-lib-openvswitch\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p"
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.330210 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0b51ef97-33e0-4889-bd54-ac4be09c39e7-env-overrides\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p"
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.330236 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0b51ef97-33e0-4889-bd54-ac4be09c39e7-host-cni-netd\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p"
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.330291 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0b51ef97-33e0-4889-bd54-ac4be09c39e7-systemd-units\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p"
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.330308 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0b51ef97-33e0-4889-bd54-ac4be09c39e7-run-ovn\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p"
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.330274 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0b51ef97-33e0-4889-bd54-ac4be09c39e7-host-run-ovn-kubernetes\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p"
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.330324 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0b51ef97-33e0-4889-bd54-ac4be09c39e7-var-lib-openvswitch\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p"
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.330389 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0b51ef97-33e0-4889-bd54-ac4be09c39e7-host-run-ovn-kubernetes\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p"
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.330488 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0b51ef97-33e0-4889-bd54-ac4be09c39e7-ovnkube-script-lib\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p"
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.330533 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/92f344d4-34bc-4412-83c9-6b7beb45db64-serviceca\") pod \"node-ca-5tt8v\" (UID: \"92f344d4-34bc-4412-83c9-6b7beb45db64\") " pod="openshift-image-registry/node-ca-5tt8v"
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.330655 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-f9kn4\" (UniqueName: \"kubernetes.io/projected/0b51ef97-33e0-4889-bd54-ac4be09c39e7-kube-api-access-f9kn4\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p"
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.330703 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4b42cc5a-50db-4588-8149-e758f33704ef-system-cni-dir\") pod \"multus-additional-cni-plugins-bmvv2\" (UID: \"4b42cc5a-50db-4588-8149-e758f33704ef\") " pod="openshift-multus/multus-additional-cni-plugins-bmvv2"
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.330739 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0b51ef97-33e0-4889-bd54-ac4be09c39e7-ovnkube-config\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p"
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.330768 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.331080 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0b51ef97-33e0-4889-bd54-ac4be09c39e7-host-cni-bin\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p"
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.331309 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.331402 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.331620 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.331705 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca" (OuterVolumeSpecName: "client-ca") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.331868 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6" (OuterVolumeSpecName: "kube-api-access-tkdh6") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "kube-api-access-tkdh6". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.332132 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "serving-cert".
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.332149 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq" (OuterVolumeSpecName: "kube-api-access-m26jq") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "kube-api-access-m26jq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.332218 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0b51ef97-33e0-4889-bd54-ac4be09c39e7-etc-openvswitch\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.332305 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0b51ef97-33e0-4889-bd54-ac4be09c39e7-etc-openvswitch\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.332315 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0b51ef97-33e0-4889-bd54-ac4be09c39e7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.332389 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2p9bt\" (UniqueName: \"kubernetes.io/projected/650d165f-75fb-4a16-a8fa-d8366b5f6eea-kube-api-access-2p9bt\") pod 
\"node-resolver-bht7q\" (UID: \"650d165f-75fb-4a16-a8fa-d8366b5f6eea\") " pod="openshift-dns/node-resolver-bht7q" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.332433 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/4b42cc5a-50db-4588-8149-e758f33704ef-os-release\") pod \"multus-additional-cni-plugins-bmvv2\" (UID: \"4b42cc5a-50db-4588-8149-e758f33704ef\") " pod="openshift-multus/multus-additional-cni-plugins-bmvv2" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.332462 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/4b42cc5a-50db-4588-8149-e758f33704ef-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-bmvv2\" (UID: \"4b42cc5a-50db-4588-8149-e758f33704ef\") " pod="openshift-multus/multus-additional-cni-plugins-bmvv2" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.332490 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/dc89765b-3b00-4f86-ae67-a5088c182918-mcd-auth-proxy-config\") pod \"machine-config-daemon-zvfcd\" (UID: \"dc89765b-3b00-4f86-ae67-a5088c182918\") " pod="openshift-machine-config-operator/machine-config-daemon-zvfcd" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.332526 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0b51ef97-33e0-4889-bd54-ac4be09c39e7-run-systemd\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.332557 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" 
(UniqueName: \"kubernetes.io/host-path/f41177fd-db48-43c1-9a8d-69cad41d3fab-multus-socket-dir-parent\") pod \"multus-xjql7\" (UID: \"f41177fd-db48-43c1-9a8d-69cad41d3fab\") " pod="openshift-multus/multus-xjql7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.332581 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/f41177fd-db48-43c1-9a8d-69cad41d3fab-host-run-k8s-cni-cncf-io\") pod \"multus-xjql7\" (UID: \"f41177fd-db48-43c1-9a8d-69cad41d3fab\") " pod="openshift-multus/multus-xjql7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.332624 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.332654 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3d8f5093-1a2e-4c32-8c74-b6cfb185cc99-metrics-certs\") pod \"network-metrics-daemon-tzrjx\" (UID: \"3d8f5093-1a2e-4c32-8c74-b6cfb185cc99\") " pod="openshift-multus/network-metrics-daemon-tzrjx" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.332683 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5976ec5f-b09c-4f83-802d-6042842fd8e6-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-sfqm7\" (UID: \"5976ec5f-b09c-4f83-802d-6042842fd8e6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-sfqm7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.333057 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0b51ef97-33e0-4889-bd54-ac4be09c39e7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.333301 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0b51ef97-33e0-4889-bd54-ac4be09c39e7-run-systemd\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.333413 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.333439 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0b51ef97-33e0-4889-bd54-ac4be09c39e7-ovnkube-config\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.333521 5115 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.333600 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d8f5093-1a2e-4c32-8c74-b6cfb185cc99-metrics-certs podName:3d8f5093-1a2e-4c32-8c74-b6cfb185cc99 nodeName:}" failed. 
No retries permitted until 2026-01-20 09:09:38.833579749 +0000 UTC m=+89.002358279 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3d8f5093-1a2e-4c32-8c74-b6cfb185cc99-metrics-certs") pod "network-metrics-daemon-tzrjx" (UID: "3d8f5093-1a2e-4c32-8c74-b6cfb185cc99") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.333632 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.334299 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5976ec5f-b09c-4f83-802d-6042842fd8e6-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-sfqm7\" (UID: \"5976ec5f-b09c-4f83-802d-6042842fd8e6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-sfqm7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.335936 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.336279 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0b51ef97-33e0-4889-bd54-ac4be09c39e7-ovn-node-metrics-cert\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.337996 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0b51ef97-33e0-4889-bd54-ac4be09c39e7-ovnkube-script-lib\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.338986 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5976ec5f-b09c-4f83-802d-6042842fd8e6-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-sfqm7\" (UID: \"5976ec5f-b09c-4f83-802d-6042842fd8e6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-sfqm7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.339108 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4b42cc5a-50db-4588-8149-e758f33704ef-tuning-conf-dir\") pod \"multus-additional-cni-plugins-bmvv2\" (UID: \"4b42cc5a-50db-4588-8149-e758f33704ef\") " pod="openshift-multus/multus-additional-cni-plugins-bmvv2" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.339146 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/dc89765b-3b00-4f86-ae67-a5088c182918-proxy-tls\") pod \"machine-config-daemon-zvfcd\" (UID: 
\"dc89765b-3b00-4f86-ae67-a5088c182918\") " pod="openshift-machine-config-operator/machine-config-daemon-zvfcd" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.339177 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f41177fd-db48-43c1-9a8d-69cad41d3fab-system-cni-dir\") pod \"multus-xjql7\" (UID: \"f41177fd-db48-43c1-9a8d-69cad41d3fab\") " pod="openshift-multus/multus-xjql7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.339210 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0b51ef97-33e0-4889-bd54-ac4be09c39e7-node-log\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.339283 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7h55j\" (UniqueName: \"kubernetes.io/projected/4b42cc5a-50db-4588-8149-e758f33704ef-kube-api-access-7h55j\") pod \"multus-additional-cni-plugins-bmvv2\" (UID: \"4b42cc5a-50db-4588-8149-e758f33704ef\") " pod="openshift-multus/multus-additional-cni-plugins-bmvv2" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.339311 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f41177fd-db48-43c1-9a8d-69cad41d3fab-os-release\") pod \"multus-xjql7\" (UID: \"f41177fd-db48-43c1-9a8d-69cad41d3fab\") " pod="openshift-multus/multus-xjql7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.339337 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f41177fd-db48-43c1-9a8d-69cad41d3fab-host-run-netns\") pod \"multus-xjql7\" (UID: 
\"f41177fd-db48-43c1-9a8d-69cad41d3fab\") " pod="openshift-multus/multus-xjql7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.339360 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f41177fd-db48-43c1-9a8d-69cad41d3fab-host-var-lib-cni-bin\") pod \"multus-xjql7\" (UID: \"f41177fd-db48-43c1-9a8d-69cad41d3fab\") " pod="openshift-multus/multus-xjql7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.339384 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/f41177fd-db48-43c1-9a8d-69cad41d3fab-hostroot\") pod \"multus-xjql7\" (UID: \"f41177fd-db48-43c1-9a8d-69cad41d3fab\") " pod="openshift-multus/multus-xjql7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.339414 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.339439 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0b51ef97-33e0-4889-bd54-ac4be09c39e7-run-openvswitch\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.339462 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4b42cc5a-50db-4588-8149-e758f33704ef-cni-binary-copy\") pod \"multus-additional-cni-plugins-bmvv2\" (UID: \"4b42cc5a-50db-4588-8149-e758f33704ef\") " 
pod="openshift-multus/multus-additional-cni-plugins-bmvv2" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.339483 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/dc89765b-3b00-4f86-ae67-a5088c182918-rootfs\") pod \"machine-config-daemon-zvfcd\" (UID: \"dc89765b-3b00-4f86-ae67-a5088c182918\") " pod="openshift-machine-config-operator/machine-config-daemon-zvfcd" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.339505 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f41177fd-db48-43c1-9a8d-69cad41d3fab-cni-binary-copy\") pod \"multus-xjql7\" (UID: \"f41177fd-db48-43c1-9a8d-69cad41d3fab\") " pod="openshift-multus/multus-xjql7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.339533 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f41177fd-db48-43c1-9a8d-69cad41d3fab-multus-conf-dir\") pod \"multus-xjql7\" (UID: \"f41177fd-db48-43c1-9a8d-69cad41d3fab\") " pod="openshift-multus/multus-xjql7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.339556 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/f41177fd-db48-43c1-9a8d-69cad41d3fab-host-run-multus-certs\") pod \"multus-xjql7\" (UID: \"f41177fd-db48-43c1-9a8d-69cad41d3fab\") " pod="openshift-multus/multus-xjql7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.339582 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0b51ef97-33e0-4889-bd54-ac4be09c39e7-log-socket\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.339605 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/92f344d4-34bc-4412-83c9-6b7beb45db64-host\") pod \"node-ca-5tt8v\" (UID: \"92f344d4-34bc-4412-83c9-6b7beb45db64\") " pod="openshift-image-registry/node-ca-5tt8v" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.339629 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/650d165f-75fb-4a16-a8fa-d8366b5f6eea-hosts-file\") pod \"node-resolver-bht7q\" (UID: \"650d165f-75fb-4a16-a8fa-d8366b5f6eea\") " pod="openshift-dns/node-resolver-bht7q" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.339654 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/650d165f-75fb-4a16-a8fa-d8366b5f6eea-tmp-dir\") pod \"node-resolver-bht7q\" (UID: \"650d165f-75fb-4a16-a8fa-d8366b5f6eea\") " pod="openshift-dns/node-resolver-bht7q" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.339678 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/f41177fd-db48-43c1-9a8d-69cad41d3fab-host-var-lib-kubelet\") pod \"multus-xjql7\" (UID: \"f41177fd-db48-43c1-9a8d-69cad41d3fab\") " pod="openshift-multus/multus-xjql7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.339823 5115 reconciler_common.go:299] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.339842 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qqbfk\" (UniqueName: 
\"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.339856 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.339870 5115 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.339883 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.339915 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l87hs\" (UniqueName: \"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.339931 5115 reconciler_common.go:299] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.339945 5115 reconciler_common.go:299] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.339961 5115 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") on node 
\"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.339976 5115 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.339990 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.340003 5115 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.340016 5115 reconciler_common.go:299] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.340030 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.340044 5115 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.340059 5115 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 
09:09:38.340072 5115 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.340088 5115 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.340104 5115 reconciler_common.go:299] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.340120 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.340133 5115 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.340147 5115 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.340163 5115 reconciler_common.go:299] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.340176 5115 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.340189 5115 reconciler_common.go:299] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.340205 5115 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.340217 5115 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.340230 5115 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.340243 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.340256 5115 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.340268 5115 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc 
kubenswrapper[5115]: I0120 09:09:38.340281 5115 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.340293 5115 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.340307 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.340319 5115 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.340334 5115 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.340347 5115 reconciler_common.go:299] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.340360 5115 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.340373 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6dmhf\" (UniqueName: 
\"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.340386 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.340401 5115 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.340415 5115 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.340431 5115 reconciler_common.go:299] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.340444 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.340457 5115 reconciler_common.go:299] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.340470 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xfp5s\" (UniqueName: 
\"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.340483 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-26xrl\" (UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.340501 5115 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.340515 5115 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.340567 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sbc2l\" (UniqueName: \"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.340594 5115 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.340609 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hm9x7\" (UniqueName: \"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.340624 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q4smf\" (UniqueName: 
\"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.340639 5115 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.340653 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tknt7\" (UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.340669 5115 reconciler_common.go:299] "Volume detached for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.340684 5115 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.340700 5115 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.346950 5115 reconciler_common.go:299] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.347023 5115 reconciler_common.go:299] "Volume detached for volume \"images\" 
(UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.347043 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.347101 5115 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.347117 5115 reconciler_common.go:299] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.347132 5115 reconciler_common.go:299] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.347191 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.347227 5115 reconciler_common.go:299] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.347241 5115 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.347256 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.347270 5115 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.347284 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.347303 5115 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.347315 5115 reconciler_common.go:299] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.347331 5115 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.347345 5115 reconciler_common.go:299] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 
09:09:38.347357 5115 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.347373 5115 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.347386 5115 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.347400 5115 reconciler_common.go:299] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.347413 5115 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.347427 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.347441 5115 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.347456 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9z4sw\" 
(UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.347471 5115 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.347486 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.347500 5115 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.347665 5115 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.347690 5115 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.347708 5115 reconciler_common.go:299] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.347727 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") on node \"crc\" DevicePath \"\"" Jan 
20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.347746 5115 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.347762 5115 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.347778 5115 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.347796 5115 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.347814 5115 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.347830 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.347847 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.347864 5115 reconciler_common.go:299] "Volume detached for volume 
\"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.347882 5115 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.347958 5115 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.347978 5115 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.347994 5115 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.348011 5115 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.348029 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.348049 5115 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.348047 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.348066 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mjwtd\" (UniqueName: \"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.341546 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5976ec5f-b09c-4f83-802d-6042842fd8e6-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-sfqm7\" (UID: \"5976ec5f-b09c-4f83-802d-6042842fd8e6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-sfqm7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.348088 5115 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.340737 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.348107 5115 reconciler_common.go:299] "Volume 
detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.341426 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0b51ef97-33e0-4889-bd54-ac4be09c39e7-run-openvswitch\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.348124 5115 reconciler_common.go:299] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.348145 5115 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.348162 5115 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.348180 5115 reconciler_common.go:299] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.348199 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 
09:09:38.348217 5115 reconciler_common.go:299] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.348234 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.348252 5115 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.348276 5115 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.348295 5115 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.348313 5115 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.348332 5115 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.348350 5115 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.348369 5115 reconciler_common.go:299] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.348389 5115 reconciler_common.go:299] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.348406 5115 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.348422 5115 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.348440 5115 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.348595 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.348610 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.348623 
5115 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.348636 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nb9c\" (UniqueName: \"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.348649 5115 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.348662 5115 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.348675 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.348688 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zth6t\" (UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.348702 5115 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.348758 5115 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: 
\"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.348772 5115 reconciler_common.go:299] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.348815 5115 reconciler_common.go:299] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.348832 5115 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.348846 5115 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.348860 5115 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.348873 5115 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.348888 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zsb9b\" (UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") on 
node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.348918 5115 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.348931 5115 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.348948 5115 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.342006 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/650d165f-75fb-4a16-a8fa-d8366b5f6eea-tmp-dir\") pod \"node-resolver-bht7q\" (UID: \"650d165f-75fb-4a16-a8fa-d8366b5f6eea\") " pod="openshift-dns/node-resolver-bht7q" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.341658 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/92f344d4-34bc-4412-83c9-6b7beb45db64-host\") pod \"node-ca-5tt8v\" (UID: \"92f344d4-34bc-4412-83c9-6b7beb45db64\") " pod="openshift-image-registry/node-ca-5tt8v" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.341629 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0b51ef97-33e0-4889-bd54-ac4be09c39e7-log-socket\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.341539 5115 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0b51ef97-33e0-4889-bd54-ac4be09c39e7-node-log\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.348967 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.349042 5115 reconciler_common.go:299] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.341642 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/650d165f-75fb-4a16-a8fa-d8366b5f6eea-hosts-file\") pod \"node-resolver-bht7q\" (UID: \"650d165f-75fb-4a16-a8fa-d8366b5f6eea\") " pod="openshift-dns/node-resolver-bht7q" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.349076 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.349092 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-twvbl\" (UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.349110 5115 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.349125 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.349129 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.349140 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.349172 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.349187 5115 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.349202 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 
crc kubenswrapper[5115]: I0120 09:09:38.349218 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.349233 5115 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.349256 5115 reconciler_common.go:299] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.349267 5115 reconciler_common.go:299] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.349278 5115 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.349288 5115 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.349299 5115 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.349309 5115 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: 
\"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.349351 5115 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.349362 5115 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.349372 5115 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.349383 5115 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.349393 5115 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.349403 5115 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.349414 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") on node \"crc\" DevicePath \"\"" Jan 20 
09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.349425 5115 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.349435 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8pskd\" (UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.349446 5115 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.349456 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qgrkj\" (UniqueName: \"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.349465 5115 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.349808 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/92f344d4-34bc-4412-83c9-6b7beb45db64-serviceca\") pod \"node-ca-5tt8v\" (UID: \"92f344d4-34bc-4412-83c9-6b7beb45db64\") " pod="openshift-image-registry/node-ca-5tt8v" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.350733 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: 
"af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.350817 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.351078 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.351019 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.351187 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.351520 5115 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.351546 5115 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.351562 5115 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.351646 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-20 09:09:38.851620421 +0000 UTC m=+89.020398951 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.352188 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.352503 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.352620 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "mcc-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.352657 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2" (OuterVolumeSpecName: "kube-api-access-ks6v2") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "kube-api-access-ks6v2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.353098 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.353120 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "default-certificate". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.354208 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74386c11-427f-467a-bfa5-799093f908c9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://62aeee29713cf7b320e1bbf81544cbd80fb6575f67080fb534f54cbf1267a767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}
,\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://568bfe79c3828aa5c26a80f41e7507eaa2342c0c17fb8d4b2e330a163c96af56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://81b0daa998eef062af8f4d4bb257256cfa372aed58e0bbba4e167bbfa574acd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\
":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://70cfe00add4342ecc6915b829d12c1c4a9476005e6a1442944be45056180b3f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70cfe00add4342ecc6915b829d12c1c4a9476005e6a1442944be45056180b3f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:08:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.354567 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.354692 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.355555 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5976ec5f-b09c-4f83-802d-6042842fd8e6-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-sfqm7\" (UID: \"5976ec5f-b09c-4f83-802d-6042842fd8e6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-sfqm7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.356001 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.356248 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.356381 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz" (OuterVolumeSpecName: "kube-api-access-7jjkz") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "kube-api-access-7jjkz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.356385 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf" (OuterVolumeSpecName: "kube-api-access-nmmzf") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "kube-api-access-nmmzf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.356871 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp" (OuterVolumeSpecName: "tmp") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.356973 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tt9ld\" (UniqueName: \"kubernetes.io/projected/5976ec5f-b09c-4f83-802d-6042842fd8e6-kube-api-access-tt9ld\") pod \"ovnkube-control-plane-57b78d8988-sfqm7\" (UID: \"5976ec5f-b09c-4f83-802d-6042842fd8e6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-sfqm7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.357029 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr" (OuterVolumeSpecName: "kube-api-access-6g4lr") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "kube-api-access-6g4lr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.357045 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.357583 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz" (OuterVolumeSpecName: "kube-api-access-grwfz") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "kube-api-access-grwfz". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.357639 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.357671 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.357772 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.357943 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities" (OuterVolumeSpecName: "utilities") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.358109 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config" (OuterVolumeSpecName: "config") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.358127 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities" (OuterVolumeSpecName: "utilities") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.358434 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities" (OuterVolumeSpecName: "utilities") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.358482 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv" (OuterVolumeSpecName: "kube-api-access-6rmnv") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "kube-api-access-6rmnv". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.358554 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.358823 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwps7\" (UniqueName: \"kubernetes.io/projected/92f344d4-34bc-4412-83c9-6b7beb45db64-kube-api-access-rwps7\") pod \"node-ca-5tt8v\" (UID: \"92f344d4-34bc-4412-83c9-6b7beb45db64\") " pod="openshift-image-registry/node-ca-5tt8v" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.359020 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.359076 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.359423 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts" (OuterVolumeSpecName: "kube-api-access-4g8ts") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "kube-api-access-4g8ts". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.359475 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.359587 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.359715 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "image-registry-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.359802 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtcxt\" (UniqueName: \"kubernetes.io/projected/3d8f5093-1a2e-4c32-8c74-b6cfb185cc99-kube-api-access-wtcxt\") pod \"network-metrics-daemon-tzrjx\" (UID: \"3d8f5093-1a2e-4c32-8c74-b6cfb185cc99\") " pod="openshift-multus/network-metrics-daemon-tzrjx" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.359820 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config" (OuterVolumeSpecName: "config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.360514 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9kn4\" (UniqueName: \"kubernetes.io/projected/0b51ef97-33e0-4889-bd54-ac4be09c39e7-kube-api-access-f9kn4\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.360933 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2p9bt\" (UniqueName: \"kubernetes.io/projected/650d165f-75fb-4a16-a8fa-d8366b5f6eea-kube-api-access-2p9bt\") pod \"node-resolver-bht7q\" (UID: \"650d165f-75fb-4a16-a8fa-d8366b5f6eea\") " pod="openshift-dns/node-resolver-bht7q" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.362244 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h" (OuterVolumeSpecName: "kube-api-access-94l9h") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: 
"16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "kube-api-access-94l9h". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.362298 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.362572 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" (OuterVolumeSpecName: "config") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.365492 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.366072 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.375596 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tzrjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d8f5093-1a2e-4c32-8c74-b6cfb185cc99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtcxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtcxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tzrjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.376650 5115 operation_generator.go:781] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.379561 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.379623 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.379636 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.379655 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.379673 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:38Z","lastTransitionTime":"2026-01-20T09:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.385214 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.395203 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b51ef97-33e0-4889-bd54-ac4be09c39e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pnd9p\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.402924 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.405620 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.406697 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.415827 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.428768 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-xjql7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f41177fd-db48-43c1-9a8d-69cad41d3fab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zmmw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xjql7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.440081 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5tt8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92f344d4-34bc-4412-83c9-6b7beb45db64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwps7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5tt8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.450511 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7g8mg\" (UniqueName: \"kubernetes.io/projected/dc89765b-3b00-4f86-ae67-a5088c182918-kube-api-access-7g8mg\") pod \"machine-config-daemon-zvfcd\" (UID: \"dc89765b-3b00-4f86-ae67-a5088c182918\") " pod="openshift-machine-config-operator/machine-config-daemon-zvfcd" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.450556 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: 
\"kubernetes.io/host-path/f41177fd-db48-43c1-9a8d-69cad41d3fab-host-var-lib-cni-multus\") pod \"multus-xjql7\" (UID: \"f41177fd-db48-43c1-9a8d-69cad41d3fab\") " pod="openshift-multus/multus-xjql7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.450603 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/f41177fd-db48-43c1-9a8d-69cad41d3fab-host-var-lib-cni-multus\") pod \"multus-xjql7\" (UID: \"f41177fd-db48-43c1-9a8d-69cad41d3fab\") " pod="openshift-multus/multus-xjql7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.450796 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f41177fd-db48-43c1-9a8d-69cad41d3fab-etc-kubernetes\") pod \"multus-xjql7\" (UID: \"f41177fd-db48-43c1-9a8d-69cad41d3fab\") " pod="openshift-multus/multus-xjql7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.450863 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f41177fd-db48-43c1-9a8d-69cad41d3fab-etc-kubernetes\") pod \"multus-xjql7\" (UID: \"f41177fd-db48-43c1-9a8d-69cad41d3fab\") " pod="openshift-multus/multus-xjql7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.450972 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/4b42cc5a-50db-4588-8149-e758f33704ef-cnibin\") pod \"multus-additional-cni-plugins-bmvv2\" (UID: \"4b42cc5a-50db-4588-8149-e758f33704ef\") " pod="openshift-multus/multus-additional-cni-plugins-bmvv2" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.451060 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4b42cc5a-50db-4588-8149-e758f33704ef-system-cni-dir\") pod \"multus-additional-cni-plugins-bmvv2\" (UID: 
\"4b42cc5a-50db-4588-8149-e758f33704ef\") " pod="openshift-multus/multus-additional-cni-plugins-bmvv2" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.451111 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/4b42cc5a-50db-4588-8149-e758f33704ef-cnibin\") pod \"multus-additional-cni-plugins-bmvv2\" (UID: \"4b42cc5a-50db-4588-8149-e758f33704ef\") " pod="openshift-multus/multus-additional-cni-plugins-bmvv2" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.451126 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/4b42cc5a-50db-4588-8149-e758f33704ef-os-release\") pod \"multus-additional-cni-plugins-bmvv2\" (UID: \"4b42cc5a-50db-4588-8149-e758f33704ef\") " pod="openshift-multus/multus-additional-cni-plugins-bmvv2" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.451158 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4b42cc5a-50db-4588-8149-e758f33704ef-system-cni-dir\") pod \"multus-additional-cni-plugins-bmvv2\" (UID: \"4b42cc5a-50db-4588-8149-e758f33704ef\") " pod="openshift-multus/multus-additional-cni-plugins-bmvv2" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.451272 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/4b42cc5a-50db-4588-8149-e758f33704ef-os-release\") pod \"multus-additional-cni-plugins-bmvv2\" (UID: \"4b42cc5a-50db-4588-8149-e758f33704ef\") " pod="openshift-multus/multus-additional-cni-plugins-bmvv2" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.451154 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/4b42cc5a-50db-4588-8149-e758f33704ef-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-bmvv2\" 
(UID: \"4b42cc5a-50db-4588-8149-e758f33704ef\") " pod="openshift-multus/multus-additional-cni-plugins-bmvv2" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.451360 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/dc89765b-3b00-4f86-ae67-a5088c182918-mcd-auth-proxy-config\") pod \"machine-config-daemon-zvfcd\" (UID: \"dc89765b-3b00-4f86-ae67-a5088c182918\") " pod="openshift-machine-config-operator/machine-config-daemon-zvfcd" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.451391 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/f41177fd-db48-43c1-9a8d-69cad41d3fab-multus-socket-dir-parent\") pod \"multus-xjql7\" (UID: \"f41177fd-db48-43c1-9a8d-69cad41d3fab\") " pod="openshift-multus/multus-xjql7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.451620 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/f41177fd-db48-43c1-9a8d-69cad41d3fab-host-run-k8s-cni-cncf-io\") pod \"multus-xjql7\" (UID: \"f41177fd-db48-43c1-9a8d-69cad41d3fab\") " pod="openshift-multus/multus-xjql7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.451673 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/f41177fd-db48-43c1-9a8d-69cad41d3fab-host-run-k8s-cni-cncf-io\") pod \"multus-xjql7\" (UID: \"f41177fd-db48-43c1-9a8d-69cad41d3fab\") " pod="openshift-multus/multus-xjql7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.451639 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/f41177fd-db48-43c1-9a8d-69cad41d3fab-multus-socket-dir-parent\") pod \"multus-xjql7\" (UID: 
\"f41177fd-db48-43c1-9a8d-69cad41d3fab\") " pod="openshift-multus/multus-xjql7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.451782 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4b42cc5a-50db-4588-8149-e758f33704ef-tuning-conf-dir\") pod \"multus-additional-cni-plugins-bmvv2\" (UID: \"4b42cc5a-50db-4588-8149-e758f33704ef\") " pod="openshift-multus/multus-additional-cni-plugins-bmvv2" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.452020 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/dc89765b-3b00-4f86-ae67-a5088c182918-proxy-tls\") pod \"machine-config-daemon-zvfcd\" (UID: \"dc89765b-3b00-4f86-ae67-a5088c182918\") " pod="openshift-machine-config-operator/machine-config-daemon-zvfcd" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.452074 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f41177fd-db48-43c1-9a8d-69cad41d3fab-system-cni-dir\") pod \"multus-xjql7\" (UID: \"f41177fd-db48-43c1-9a8d-69cad41d3fab\") " pod="openshift-multus/multus-xjql7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.452040 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4b42cc5a-50db-4588-8149-e758f33704ef-tuning-conf-dir\") pod \"multus-additional-cni-plugins-bmvv2\" (UID: \"4b42cc5a-50db-4588-8149-e758f33704ef\") " pod="openshift-multus/multus-additional-cni-plugins-bmvv2" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.452122 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/4b42cc5a-50db-4588-8149-e758f33704ef-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-bmvv2\" (UID: 
\"4b42cc5a-50db-4588-8149-e758f33704ef\") " pod="openshift-multus/multus-additional-cni-plugins-bmvv2" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.452126 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/dc89765b-3b00-4f86-ae67-a5088c182918-mcd-auth-proxy-config\") pod \"machine-config-daemon-zvfcd\" (UID: \"dc89765b-3b00-4f86-ae67-a5088c182918\") " pod="openshift-machine-config-operator/machine-config-daemon-zvfcd" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.452119 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7h55j\" (UniqueName: \"kubernetes.io/projected/4b42cc5a-50db-4588-8149-e758f33704ef-kube-api-access-7h55j\") pod \"multus-additional-cni-plugins-bmvv2\" (UID: \"4b42cc5a-50db-4588-8149-e758f33704ef\") " pod="openshift-multus/multus-additional-cni-plugins-bmvv2" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.452209 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f41177fd-db48-43c1-9a8d-69cad41d3fab-os-release\") pod \"multus-xjql7\" (UID: \"f41177fd-db48-43c1-9a8d-69cad41d3fab\") " pod="openshift-multus/multus-xjql7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.452211 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f41177fd-db48-43c1-9a8d-69cad41d3fab-system-cni-dir\") pod \"multus-xjql7\" (UID: \"f41177fd-db48-43c1-9a8d-69cad41d3fab\") " pod="openshift-multus/multus-xjql7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.452243 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f41177fd-db48-43c1-9a8d-69cad41d3fab-host-run-netns\") pod \"multus-xjql7\" (UID: \"f41177fd-db48-43c1-9a8d-69cad41d3fab\") " 
pod="openshift-multus/multus-xjql7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.452278 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f41177fd-db48-43c1-9a8d-69cad41d3fab-host-var-lib-cni-bin\") pod \"multus-xjql7\" (UID: \"f41177fd-db48-43c1-9a8d-69cad41d3fab\") " pod="openshift-multus/multus-xjql7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.452314 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/f41177fd-db48-43c1-9a8d-69cad41d3fab-hostroot\") pod \"multus-xjql7\" (UID: \"f41177fd-db48-43c1-9a8d-69cad41d3fab\") " pod="openshift-multus/multus-xjql7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.452351 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f41177fd-db48-43c1-9a8d-69cad41d3fab-host-var-lib-cni-bin\") pod \"multus-xjql7\" (UID: \"f41177fd-db48-43c1-9a8d-69cad41d3fab\") " pod="openshift-multus/multus-xjql7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.452286 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f41177fd-db48-43c1-9a8d-69cad41d3fab-os-release\") pod \"multus-xjql7\" (UID: \"f41177fd-db48-43c1-9a8d-69cad41d3fab\") " pod="openshift-multus/multus-xjql7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.452318 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f41177fd-db48-43c1-9a8d-69cad41d3fab-host-run-netns\") pod \"multus-xjql7\" (UID: \"f41177fd-db48-43c1-9a8d-69cad41d3fab\") " pod="openshift-multus/multus-xjql7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.452364 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" 
(UniqueName: \"kubernetes.io/configmap/4b42cc5a-50db-4588-8149-e758f33704ef-cni-binary-copy\") pod \"multus-additional-cni-plugins-bmvv2\" (UID: \"4b42cc5a-50db-4588-8149-e758f33704ef\") " pod="openshift-multus/multus-additional-cni-plugins-bmvv2" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.452429 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/dc89765b-3b00-4f86-ae67-a5088c182918-rootfs\") pod \"machine-config-daemon-zvfcd\" (UID: \"dc89765b-3b00-4f86-ae67-a5088c182918\") " pod="openshift-machine-config-operator/machine-config-daemon-zvfcd" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.452397 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/f41177fd-db48-43c1-9a8d-69cad41d3fab-hostroot\") pod \"multus-xjql7\" (UID: \"f41177fd-db48-43c1-9a8d-69cad41d3fab\") " pod="openshift-multus/multus-xjql7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.452460 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f41177fd-db48-43c1-9a8d-69cad41d3fab-cni-binary-copy\") pod \"multus-xjql7\" (UID: \"f41177fd-db48-43c1-9a8d-69cad41d3fab\") " pod="openshift-multus/multus-xjql7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.452493 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/dc89765b-3b00-4f86-ae67-a5088c182918-rootfs\") pod \"machine-config-daemon-zvfcd\" (UID: \"dc89765b-3b00-4f86-ae67-a5088c182918\") " pod="openshift-machine-config-operator/machine-config-daemon-zvfcd" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.452494 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f41177fd-db48-43c1-9a8d-69cad41d3fab-multus-conf-dir\") 
pod \"multus-xjql7\" (UID: \"f41177fd-db48-43c1-9a8d-69cad41d3fab\") " pod="openshift-multus/multus-xjql7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.452548 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/f41177fd-db48-43c1-9a8d-69cad41d3fab-host-run-multus-certs\") pod \"multus-xjql7\" (UID: \"f41177fd-db48-43c1-9a8d-69cad41d3fab\") " pod="openshift-multus/multus-xjql7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.452575 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f41177fd-db48-43c1-9a8d-69cad41d3fab-multus-conf-dir\") pod \"multus-xjql7\" (UID: \"f41177fd-db48-43c1-9a8d-69cad41d3fab\") " pod="openshift-multus/multus-xjql7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.452576 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/f41177fd-db48-43c1-9a8d-69cad41d3fab-host-var-lib-kubelet\") pod \"multus-xjql7\" (UID: \"f41177fd-db48-43c1-9a8d-69cad41d3fab\") " pod="openshift-multus/multus-xjql7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.452602 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/f41177fd-db48-43c1-9a8d-69cad41d3fab-host-var-lib-kubelet\") pod \"multus-xjql7\" (UID: \"f41177fd-db48-43c1-9a8d-69cad41d3fab\") " pod="openshift-multus/multus-xjql7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.452639 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/f41177fd-db48-43c1-9a8d-69cad41d3fab-host-run-multus-certs\") pod \"multus-xjql7\" (UID: \"f41177fd-db48-43c1-9a8d-69cad41d3fab\") " pod="openshift-multus/multus-xjql7" Jan 20 09:09:38 crc kubenswrapper[5115]: 
I0120 09:09:38.452641 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/4b42cc5a-50db-4588-8149-e758f33704ef-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-bmvv2\" (UID: \"4b42cc5a-50db-4588-8149-e758f33704ef\") " pod="openshift-multus/multus-additional-cni-plugins-bmvv2" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.452683 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f41177fd-db48-43c1-9a8d-69cad41d3fab-multus-cni-dir\") pod \"multus-xjql7\" (UID: \"f41177fd-db48-43c1-9a8d-69cad41d3fab\") " pod="openshift-multus/multus-xjql7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.452715 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f41177fd-db48-43c1-9a8d-69cad41d3fab-cnibin\") pod \"multus-xjql7\" (UID: \"f41177fd-db48-43c1-9a8d-69cad41d3fab\") " pod="openshift-multus/multus-xjql7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.452752 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/f41177fd-db48-43c1-9a8d-69cad41d3fab-multus-daemon-config\") pod \"multus-xjql7\" (UID: \"f41177fd-db48-43c1-9a8d-69cad41d3fab\") " pod="openshift-multus/multus-xjql7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.452785 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6zmmw\" (UniqueName: \"kubernetes.io/projected/f41177fd-db48-43c1-9a8d-69cad41d3fab-kube-api-access-6zmmw\") pod \"multus-xjql7\" (UID: \"f41177fd-db48-43c1-9a8d-69cad41d3fab\") " pod="openshift-multus/multus-xjql7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.452952 5115 reconciler_common.go:299] "Volume detached for volume 
\"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.452975 5115 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.452996 5115 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.452989 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f41177fd-db48-43c1-9a8d-69cad41d3fab-cnibin\") pod \"multus-xjql7\" (UID: \"f41177fd-db48-43c1-9a8d-69cad41d3fab\") " pod="openshift-multus/multus-xjql7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.453020 5115 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.453059 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4b42cc5a-50db-4588-8149-e758f33704ef-cni-binary-copy\") pod \"multus-additional-cni-plugins-bmvv2\" (UID: \"4b42cc5a-50db-4588-8149-e758f33704ef\") " pod="openshift-multus/multus-additional-cni-plugins-bmvv2" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.453064 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f41177fd-db48-43c1-9a8d-69cad41d3fab-cni-binary-copy\") pod \"multus-xjql7\" (UID: 
\"f41177fd-db48-43c1-9a8d-69cad41d3fab\") " pod="openshift-multus/multus-xjql7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.453091 5115 reconciler_common.go:299] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.453150 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/4b42cc5a-50db-4588-8149-e758f33704ef-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-bmvv2\" (UID: \"4b42cc5a-50db-4588-8149-e758f33704ef\") " pod="openshift-multus/multus-additional-cni-plugins-bmvv2" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.453194 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.453522 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f41177fd-db48-43c1-9a8d-69cad41d3fab-multus-cni-dir\") pod \"multus-xjql7\" (UID: \"f41177fd-db48-43c1-9a8d-69cad41d3fab\") " pod="openshift-multus/multus-xjql7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.453847 5115 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.453878 5115 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.453943 5115 reconciler_common.go:299] "Volume 
detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.453964 5115 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.453982 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4g8ts\" (UniqueName: \"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.454001 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.454018 5115 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.454036 5115 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.454053 5115 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.454072 5115 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.454096 5115 reconciler_common.go:299] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.454118 5115 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.454137 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.454156 5115 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.454177 5115 reconciler_common.go:299] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.454195 5115 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.454213 5115 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") on node \"crc\" DevicePath \"\"" Jan 20 
09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.454231 5115 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.454247 5115 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.454265 5115 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.454282 5115 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.454303 5115 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.454320 5115 reconciler_common.go:299] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.454337 5115 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.454353 5115 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: 
\"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.454369 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.454385 5115 reconciler_common.go:299] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.454402 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7jjkz\" (UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.454418 5115 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.454608 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.454628 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.454645 5115 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.454662 5115 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.454681 5115 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.454700 5115 reconciler_common.go:299] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.454717 5115 reconciler_common.go:299] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.454739 5115 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.454758 5115 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.454778 5115 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.454799 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.454817 5115 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.454835 5115 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.454552 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/f41177fd-db48-43c1-9a8d-69cad41d3fab-multus-daemon-config\") pod \"multus-xjql7\" (UID: \"f41177fd-db48-43c1-9a8d-69cad41d3fab\") " pod="openshift-multus/multus-xjql7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.457361 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/dc89765b-3b00-4f86-ae67-a5088c182918-proxy-tls\") pod \"machine-config-daemon-zvfcd\" (UID: \"dc89765b-3b00-4f86-ae67-a5088c182918\") " pod="openshift-machine-config-operator/machine-config-daemon-zvfcd" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.466837 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.473982 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7h55j\" (UniqueName: \"kubernetes.io/projected/4b42cc5a-50db-4588-8149-e758f33704ef-kube-api-access-7h55j\") pod \"multus-additional-cni-plugins-bmvv2\" (UID: \"4b42cc5a-50db-4588-8149-e758f33704ef\") " pod="openshift-multus/multus-additional-cni-plugins-bmvv2" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.474081 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.475360 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zmmw\" (UniqueName: \"kubernetes.io/projected/f41177fd-db48-43c1-9a8d-69cad41d3fab-kube-api-access-6zmmw\") pod \"multus-xjql7\" (UID: \"f41177fd-db48-43c1-9a8d-69cad41d3fab\") " pod="openshift-multus/multus-xjql7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.477847 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7g8mg\" (UniqueName: \"kubernetes.io/projected/dc89765b-3b00-4f86-ae67-a5088c182918-kube-api-access-7g8mg\") pod \"machine-config-daemon-zvfcd\" (UID: \"dc89765b-3b00-4f86-ae67-a5088c182918\") " pod="openshift-machine-config-operator/machine-config-daemon-zvfcd" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.477793 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69226b59-0946-40c7-a9a3-38368638de30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://3438785036ee5cce0cfb7ef5015765de9e91020a660f22067f83fe7088f6983a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",
\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://7cf2bf860f3578cf077c66e64feccdb0f4aa9b087c452b75e9089435dbe938ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f147340eaa8ad9365db74bb82cf821ebd6579e31407e87af1956220ccf9907a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\
"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c3c4ab2513a300c9031279fe7c4f932126d69745f336cee3a8adcd6cd8bd0cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://92465a413675efac7faed27b64279954bdfa6292127a177c3bff862358a9a025\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a68
2480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f10288ab59089753121af19b0f7ad453fc7b7a50d66e5d429b1f5e46962dbc35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f10288ab59089753121af19b0f7ad453fc7b7a50d66e5d429b1f5e46962dbc35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}
,\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://7eb1250e743174b8367f49423c7ae31f24a26c0ce04d9cf83c7747f12e1363c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7eb1250e743174b8367f49423c7ae31f24a26c0ce04d9cf83c7747f12e1363c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://e0f6ae1955cc25375e7a0bdbfb67efdcbb6a702b9faf54d40a444392f0e7c151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-
o://e0f6ae1955cc25375e7a0bdbfb67efdcbb6a702b9faf54d40a444392f0e7c151\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:08:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.482071 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-zvfcd" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.484192 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.484242 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.484263 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.484288 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.484308 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:38Z","lastTransitionTime":"2026-01-20T09:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.484803 5115 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 20 09:09:38 crc kubenswrapper[5115]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,Command:[/bin/bash -c #!/bin/bash Jan 20 09:09:38 crc kubenswrapper[5115]: set -o allexport Jan 20 09:09:38 crc kubenswrapper[5115]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Jan 20 09:09:38 crc kubenswrapper[5115]: source /etc/kubernetes/apiserver-url.env Jan 20 09:09:38 crc kubenswrapper[5115]: else Jan 20 09:09:38 crc kubenswrapper[5115]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Jan 20 09:09:38 crc kubenswrapper[5115]: exit 1 Jan 20 09:09:38 crc kubenswrapper[5115]: fi Jan 20 09:09:38 crc kubenswrapper[5115]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Jan 20 09:09:38 crc kubenswrapper[5115]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.20.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:951276a60f15185a05902cf1ec49b6db3e4f049ec638828b336aed496f8dfc45,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b5000f8f055fd8f734ef74afbd9bd5333a38345cbc4959ddaad728b8394bccd4,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be136d591a0eeb3f7bedf04aabb5481a23b6645316d5cef3cd5be1787344c2b5,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},
EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91997a073272252cac9cd31915ec74217637c55d1abc725107c6eb677ddddc9b,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a974f04d4aefdb39bf2d4649b24e7e0e87685afa3d07ca46234f1a0c5688e4b,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFi
eldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7xz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-7bdcf4f5bd-7fjxv_openshift-network-operator(34177974-8d82-49d2-a763-391d0df3bbd8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 20 09:09:38 crc kubenswrapper[5115]: > logger="UnhandledError" Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.486045 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" podUID="34177974-8d82-49d2-a763-391d0df3bbd8" Jan 20 09:09:38 crc kubenswrapper[5115]: W0120 09:09:38.487819 5115 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc4541ce_7789_4670_bc75_5c2868e52ce0.slice/crio-89a1923678c192fbf3a8fa027b144dadcc5e7008b1288bb632d710d9da597b3f WatchSource:0}: Error finding container 89a1923678c192fbf3a8fa027b144dadcc5e7008b1288bb632d710d9da597b3f: Status 404 returned error can't find the container with id 89a1923678c192fbf3a8fa027b144dadcc5e7008b1288bb632d710d9da597b3f Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.492332 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-xjql7" Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.494549 5115 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dsgwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-5jnd7_openshift-network-operator(428b39f5-eb1c-4f65-b7a4-eeb6e84860cc): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.495306 5115 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 20 09:09:38 crc kubenswrapper[5115]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Jan 20 09:09:38 crc kubenswrapper[5115]: if [[ -f "/env/_master" ]]; then Jan 20 09:09:38 crc kubenswrapper[5115]: set -o allexport Jan 20 09:09:38 crc kubenswrapper[5115]: source "/env/_master" Jan 20 09:09:38 crc kubenswrapper[5115]: set +o allexport Jan 20 09:09:38 crc 
kubenswrapper[5115]: fi Jan 20 09:09:38 crc kubenswrapper[5115]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. Jan 20 09:09:38 crc kubenswrapper[5115]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Jan 20 09:09:38 crc kubenswrapper[5115]: ho_enable="--enable-hybrid-overlay" Jan 20 09:09:38 crc kubenswrapper[5115]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Jan 20 09:09:38 crc kubenswrapper[5115]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Jan 20 09:09:38 crc kubenswrapper[5115]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Jan 20 09:09:38 crc kubenswrapper[5115]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Jan 20 09:09:38 crc kubenswrapper[5115]: --webhook-cert-dir="/etc/webhook-cert" \ Jan 20 09:09:38 crc kubenswrapper[5115]: --webhook-host=127.0.0.1 \ Jan 20 09:09:38 crc kubenswrapper[5115]: --webhook-port=9743 \ Jan 20 09:09:38 crc kubenswrapper[5115]: ${ho_enable} \ Jan 20 09:09:38 crc kubenswrapper[5115]: --enable-interconnect \ Jan 20 09:09:38 crc kubenswrapper[5115]: --disable-approver \ Jan 20 09:09:38 crc kubenswrapper[5115]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Jan 20 09:09:38 crc kubenswrapper[5115]: --wait-for-kubernetes-api=200s \ Jan 20 09:09:38 crc kubenswrapper[5115]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Jan 20 09:09:38 crc kubenswrapper[5115]: --loglevel="${LOGLEVEL}" Jan 20 09:09:38 crc kubenswrapper[5115]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 20 09:09:38 crc 
kubenswrapper[5115]: > logger="UnhandledError" Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.496219 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-5jnd7" podUID="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.499800 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.501293 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-5tt8v" Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.504474 5115 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 20 09:09:38 crc kubenswrapper[5115]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Jan 20 09:09:38 crc kubenswrapper[5115]: if [[ -f "/env/_master" ]]; then Jan 20 09:09:38 crc kubenswrapper[5115]: set -o allexport Jan 20 09:09:38 crc kubenswrapper[5115]: source "/env/_master" Jan 20 09:09:38 crc kubenswrapper[5115]: set +o allexport Jan 20 09:09:38 crc kubenswrapper[5115]: fi Jan 20 09:09:38 crc kubenswrapper[5115]: Jan 20 09:09:38 crc kubenswrapper[5115]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Jan 20 09:09:38 crc kubenswrapper[5115]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Jan 20 09:09:38 crc kubenswrapper[5115]: --disable-webhook \ Jan 20 09:09:38 crc kubenswrapper[5115]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Jan 20 09:09:38 crc kubenswrapper[5115]: --loglevel="${LOGLEVEL}" Jan 20 09:09:38 crc kubenswrapper[5115]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 20 09:09:38 crc kubenswrapper[5115]: > logger="UnhandledError" Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.506138 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-dgvkt" 
podUID="fc4541ce-7789-4670-bc75-5c2868e52ce0" Jan 20 09:09:38 crc kubenswrapper[5115]: W0120 09:09:38.512269 5115 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddc89765b_3b00_4f86_ae67_a5088c182918.slice/crio-29419067e362c04408ee6901ca499156e52be8d357dd0341693b338a5accc60c WatchSource:0}: Error finding container 29419067e362c04408ee6901ca499156e52be8d357dd0341693b338a5accc60c: Status 404 returned error can't find the container with id 29419067e362c04408ee6901ca499156e52be8d357dd0341693b338a5accc60c Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.513963 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:38 crc kubenswrapper[5115]: W0120 09:09:38.515245 5115 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf41177fd_db48_43c1_9a8d_69cad41d3fab.slice/crio-659f0ebcc7c90f8ab600f9b5cdedfe62387d5d2f5f114dc5c0d0a72e2046bbb2 WatchSource:0}: Error finding container 659f0ebcc7c90f8ab600f9b5cdedfe62387d5d2f5f114dc5c0d0a72e2046bbb2: Status 404 returned error can't find the container with id 659f0ebcc7c90f8ab600f9b5cdedfe62387d5d2f5f114dc5c0d0a72e2046bbb2 Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.515317 5115 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115,Command:[/usr/bin/machine-config-daemon],Args:[start 
--payload-version=4.20.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7g8mg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-zvfcd_openshift-machine-config-operator(dc89765b-3b00-4f86-ae67-a5088c182918): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" 
logger="UnhandledError" Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.516582 5115 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 20 09:09:38 crc kubenswrapper[5115]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Jan 20 09:09:38 crc kubenswrapper[5115]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Jan 20 09:09:38 crc kubenswrapper[5115]: ],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{
Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6zmmw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-xjql7_openshift-multus(f41177fd-db48-43c1-9a8d-69cad41d3fab): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 20 09:09:38 crc kubenswrapper[5115]: > logger="UnhandledError" Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.517320 5115 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml 
--tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7g8mg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-zvfcd_openshift-machine-config-operator(dc89765b-3b00-4f86-ae67-a5088c182918): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.518565 5115 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-xjql7" podUID="f41177fd-db48-43c1-9a8d-69cad41d3fab" Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.518712 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-zvfcd" podUID="dc89765b-3b00-4f86-ae67-a5088c182918" Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.522928 5115 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 20 09:09:38 crc kubenswrapper[5115]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Jan 20 09:09:38 crc kubenswrapper[5115]: while [ true ]; Jan 20 09:09:38 crc kubenswrapper[5115]: do Jan 20 09:09:38 crc kubenswrapper[5115]: for f in $(ls /tmp/serviceca); do Jan 20 09:09:38 crc kubenswrapper[5115]: echo $f Jan 20 09:09:38 crc kubenswrapper[5115]: ca_file_path="/tmp/serviceca/${f}" Jan 20 09:09:38 crc kubenswrapper[5115]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Jan 20 09:09:38 crc kubenswrapper[5115]: reg_dir_path="/etc/docker/certs.d/${f}" Jan 20 09:09:38 crc kubenswrapper[5115]: if [ -e "${reg_dir_path}" ]; then Jan 20 09:09:38 crc kubenswrapper[5115]: cp -u $ca_file_path $reg_dir_path/ca.crt Jan 20 09:09:38 crc kubenswrapper[5115]: else Jan 20 09:09:38 crc kubenswrapper[5115]: mkdir 
$reg_dir_path Jan 20 09:09:38 crc kubenswrapper[5115]: cp $ca_file_path $reg_dir_path/ca.crt Jan 20 09:09:38 crc kubenswrapper[5115]: fi Jan 20 09:09:38 crc kubenswrapper[5115]: done Jan 20 09:09:38 crc kubenswrapper[5115]: for d in $(ls /etc/docker/certs.d); do Jan 20 09:09:38 crc kubenswrapper[5115]: echo $d Jan 20 09:09:38 crc kubenswrapper[5115]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Jan 20 09:09:38 crc kubenswrapper[5115]: reg_conf_path="/tmp/serviceca/${dp}" Jan 20 09:09:38 crc kubenswrapper[5115]: if [ ! -e "${reg_conf_path}" ]; then Jan 20 09:09:38 crc kubenswrapper[5115]: rm -rf /etc/docker/certs.d/$d Jan 20 09:09:38 crc kubenswrapper[5115]: fi Jan 20 09:09:38 crc kubenswrapper[5115]: done Jan 20 09:09:38 crc kubenswrapper[5115]: sleep 60 & wait ${!} Jan 20 09:09:38 crc kubenswrapper[5115]: done Jan 20 09:09:38 crc kubenswrapper[5115]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rwps7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-5tt8v_openshift-image-registry(92f344d4-34bc-4412-83c9-6b7beb45db64): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 20 09:09:38 crc kubenswrapper[5115]: > logger="UnhandledError" Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.524091 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-5tt8v" podUID="92f344d4-34bc-4412-83c9-6b7beb45db64" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.527411 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zvfcd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc89765b-3b00-4f86-ae67-a5088c182918\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7g8mg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7g8mg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zvfcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.529942 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-bht7q" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.538420 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-bht7q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"650d165f-75fb-4a16-a8fa-d8366b5f6eea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2p9bt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bht7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:38 crc kubenswrapper[5115]: W0120 09:09:38.542333 5115 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod650d165f_75fb_4a16_a8fa_d8366b5f6eea.slice/crio-e5b425fcbf0f92c258bd50d42451ebe08ec6bd14f9d6b2c2df15cbbe24f22153 WatchSource:0}: Error finding container e5b425fcbf0f92c258bd50d42451ebe08ec6bd14f9d6b2c2df15cbbe24f22153: Status 404 returned error can't find the container with id e5b425fcbf0f92c258bd50d42451ebe08ec6bd14f9d6b2c2df15cbbe24f22153 Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.544346 5115 
kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 20 09:09:38 crc kubenswrapper[5115]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/bin/bash -c #!/bin/bash Jan 20 09:09:38 crc kubenswrapper[5115]: set -uo pipefail Jan 20 09:09:38 crc kubenswrapper[5115]: Jan 20 09:09:38 crc kubenswrapper[5115]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Jan 20 09:09:38 crc kubenswrapper[5115]: Jan 20 09:09:38 crc kubenswrapper[5115]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Jan 20 09:09:38 crc kubenswrapper[5115]: HOSTS_FILE="/etc/hosts" Jan 20 09:09:38 crc kubenswrapper[5115]: TEMP_FILE="/tmp/hosts.tmp" Jan 20 09:09:38 crc kubenswrapper[5115]: Jan 20 09:09:38 crc kubenswrapper[5115]: IFS=', ' read -r -a services <<< "${SERVICES}" Jan 20 09:09:38 crc kubenswrapper[5115]: Jan 20 09:09:38 crc kubenswrapper[5115]: # Make a temporary file with the old hosts file's attributes. Jan 20 09:09:38 crc kubenswrapper[5115]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Jan 20 09:09:38 crc kubenswrapper[5115]: echo "Failed to preserve hosts file. Exiting." Jan 20 09:09:38 crc kubenswrapper[5115]: exit 1 Jan 20 09:09:38 crc kubenswrapper[5115]: fi Jan 20 09:09:38 crc kubenswrapper[5115]: Jan 20 09:09:38 crc kubenswrapper[5115]: while true; do Jan 20 09:09:38 crc kubenswrapper[5115]: declare -A svc_ips Jan 20 09:09:38 crc kubenswrapper[5115]: for svc in "${services[@]}"; do Jan 20 09:09:38 crc kubenswrapper[5115]: # Fetch service IP from cluster dns if present. We make several tries Jan 20 09:09:38 crc kubenswrapper[5115]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Jan 20 09:09:38 crc kubenswrapper[5115]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Jan 20 09:09:38 crc kubenswrapper[5115]: # support UDP loadbalancers and require reaching DNS through TCP. 
Jan 20 09:09:38 crc kubenswrapper[5115]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Jan 20 09:09:38 crc kubenswrapper[5115]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Jan 20 09:09:38 crc kubenswrapper[5115]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Jan 20 09:09:38 crc kubenswrapper[5115]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Jan 20 09:09:38 crc kubenswrapper[5115]: for i in ${!cmds[*]} Jan 20 09:09:38 crc kubenswrapper[5115]: do Jan 20 09:09:38 crc kubenswrapper[5115]: ips=($(eval "${cmds[i]}")) Jan 20 09:09:38 crc kubenswrapper[5115]: if [[ "$?" -eq 0 && "${#ips[@]}" -ne 0 ]]; then Jan 20 09:09:38 crc kubenswrapper[5115]: svc_ips["${svc}"]="${ips[@]}" Jan 20 09:09:38 crc kubenswrapper[5115]: break Jan 20 09:09:38 crc kubenswrapper[5115]: fi Jan 20 09:09:38 crc kubenswrapper[5115]: done Jan 20 09:09:38 crc kubenswrapper[5115]: done Jan 20 09:09:38 crc kubenswrapper[5115]: Jan 20 09:09:38 crc kubenswrapper[5115]: # Update /etc/hosts only if we get valid service IPs Jan 20 09:09:38 crc kubenswrapper[5115]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Jan 20 09:09:38 crc kubenswrapper[5115]: # Stale entries could exist in /etc/hosts if the service is deleted Jan 20 09:09:38 crc kubenswrapper[5115]: if [[ -n "${svc_ips[*]-}" ]]; then Jan 20 09:09:38 crc kubenswrapper[5115]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Jan 20 09:09:38 crc kubenswrapper[5115]: if ! 
sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Jan 20 09:09:38 crc kubenswrapper[5115]: # Only continue rebuilding the hosts entries if its original content is preserved Jan 20 09:09:38 crc kubenswrapper[5115]: sleep 60 & wait Jan 20 09:09:38 crc kubenswrapper[5115]: continue Jan 20 09:09:38 crc kubenswrapper[5115]: fi Jan 20 09:09:38 crc kubenswrapper[5115]: Jan 20 09:09:38 crc kubenswrapper[5115]: # Append resolver entries for services Jan 20 09:09:38 crc kubenswrapper[5115]: rc=0 Jan 20 09:09:38 crc kubenswrapper[5115]: for svc in "${!svc_ips[@]}"; do Jan 20 09:09:38 crc kubenswrapper[5115]: for ip in ${svc_ips[${svc}]}; do Jan 20 09:09:38 crc kubenswrapper[5115]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? Jan 20 09:09:38 crc kubenswrapper[5115]: done Jan 20 09:09:38 crc kubenswrapper[5115]: done Jan 20 09:09:38 crc kubenswrapper[5115]: if [[ $rc -ne 0 ]]; then Jan 20 09:09:38 crc kubenswrapper[5115]: sleep 60 & wait Jan 20 09:09:38 crc kubenswrapper[5115]: continue Jan 20 09:09:38 crc kubenswrapper[5115]: fi Jan 20 09:09:38 crc kubenswrapper[5115]: Jan 20 09:09:38 crc kubenswrapper[5115]: Jan 20 09:09:38 crc kubenswrapper[5115]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Jan 20 09:09:38 crc kubenswrapper[5115]: # Replace /etc/hosts with our modified version if needed Jan 20 09:09:38 crc kubenswrapper[5115]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Jan 20 09:09:38 crc kubenswrapper[5115]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Jan 20 09:09:38 crc kubenswrapper[5115]: fi Jan 20 09:09:38 crc kubenswrapper[5115]: sleep 60 & wait Jan 20 09:09:38 crc kubenswrapper[5115]: unset svc_ips Jan 20 09:09:38 crc kubenswrapper[5115]: done Jan 20 09:09:38 crc kubenswrapper[5115]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2p9bt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-bht7q_openshift-dns(650d165f-75fb-4a16-a8fa-d8366b5f6eea): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 20 09:09:38 crc kubenswrapper[5115]: > logger="UnhandledError" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.544364 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-bmvv2"
Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.546150 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-bht7q" podUID="650d165f-75fb-4a16-a8fa-d8366b5f6eea"
Jan 20 09:09:38 crc kubenswrapper[5115]: W0120 09:09:38.554460 5115 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4b42cc5a_50db_4588_8149_e758f33704ef.slice/crio-ec90b51c7f2a26c46864e33dac72b099ba300ae018c17039c97afc265a44269d WatchSource:0}: Error finding container ec90b51c7f2a26c46864e33dac72b099ba300ae018c17039c97afc265a44269d: Status 404 returned error can't find the container with id ec90b51c7f2a26c46864e33dac72b099ba300ae018c17039c97afc265a44269d
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.555621 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.557077 5115 kuberuntime_manager.go:1358] "Unhandled Error" err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7h55j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-bmvv2_openshift-multus(4b42cc5a-50db-4588-8149-e758f33704ef): CreateContainerConfigError: services have not yet 
been read at least once, cannot construct envvars" logger="UnhandledError" Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.558167 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-bmvv2" podUID="4b42cc5a-50db-4588-8149-e758f33704ef" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.558802 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bmvv2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b42cc5a-50db-4588-8149-e758f33704ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"
name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bmvv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.562929 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-sfqm7" Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.571131 5115 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 20 09:09:38 crc kubenswrapper[5115]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Jan 20 09:09:38 crc kubenswrapper[5115]: apiVersion: v1 Jan 20 09:09:38 crc kubenswrapper[5115]: clusters: Jan 20 09:09:38 crc kubenswrapper[5115]: - cluster: Jan 20 09:09:38 crc kubenswrapper[5115]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Jan 20 09:09:38 crc kubenswrapper[5115]: server: https://api-int.crc.testing:6443 Jan 20 09:09:38 crc kubenswrapper[5115]: name: default-cluster Jan 20 09:09:38 crc kubenswrapper[5115]: contexts: Jan 20 09:09:38 crc kubenswrapper[5115]: - context: Jan 20 09:09:38 crc kubenswrapper[5115]: cluster: default-cluster Jan 20 09:09:38 crc kubenswrapper[5115]: namespace: default Jan 20 09:09:38 crc kubenswrapper[5115]: user: default-auth Jan 20 09:09:38 crc 
kubenswrapper[5115]: name: default-context Jan 20 09:09:38 crc kubenswrapper[5115]: current-context: default-context Jan 20 09:09:38 crc kubenswrapper[5115]: kind: Config Jan 20 09:09:38 crc kubenswrapper[5115]: preferences: {} Jan 20 09:09:38 crc kubenswrapper[5115]: users: Jan 20 09:09:38 crc kubenswrapper[5115]: - name: default-auth Jan 20 09:09:38 crc kubenswrapper[5115]: user: Jan 20 09:09:38 crc kubenswrapper[5115]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Jan 20 09:09:38 crc kubenswrapper[5115]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Jan 20 09:09:38 crc kubenswrapper[5115]: EOF Jan 20 09:09:38 crc kubenswrapper[5115]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f9kn4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-pnd9p_openshift-ovn-kubernetes(0b51ef97-33e0-4889-bd54-ac4be09c39e7): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 20 09:09:38 crc kubenswrapper[5115]: > logger="UnhandledError" Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.572397 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" podUID="0b51ef97-33e0-4889-bd54-ac4be09c39e7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.575306 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5125ab95-d5cf-48ad-a899-3add343eaeba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://732f833d741db4f25185d597b6c55514eac6e2fefadb22332239b99e78faa12c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4459efcaad2c1e7ab6acad4f70731a19325a72c01d38b2f5c5ebb0e654c3e652\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7bc7ce39ff7ab01bae0a1441c0086dd0bb588059f1c38dcf038a03d08f73e0f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b33a6c20a1
9b1f41a4ad2db77e01a63543fbc61ea2e64cd19c7c6530bae76c3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b33a6c20a19b1f41a4ad2db77e01a63543fbc61ea2e64cd19c7c6530bae76c3b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T09:09:22Z\\\",\\\"message\\\":\\\"var.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"ClientsAllowCBOR\\\\\\\" enabled=false\\\\nW0120 09:09:21.702814 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0120 09:09:21.703031 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0120 09:09:21.704002 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4007456761/tls.crt::/tmp/serving-cert-4007456761/tls.key\\\\\\\"\\\\nI0120 09:09:22.179437 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 09:09:22.181269 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 09:09:22.181287 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 09:09:22.181316 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 09:09:22.181321 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 09:09:22.184781 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 09:09:22.184834 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:09:22.184840 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:09:22.184845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 09:09:22.184848 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 09:09:22.184851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 09:09:22.184854 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 09:09:22.185244 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 09:09:22.186562 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T09:09:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6a65133584c92a02557ec7a68bc231cbf328c72b94121d393761fae9e77a43df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f042b661a3072f2466176ad58de653a1ac5fd34d0d1c9b846b833b88bded9006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f042b661a3072f2466176ad58de653a1ac5fd34d0d1c9b846b833b88bded9006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:08:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:38 crc kubenswrapper[5115]: W0120 09:09:38.579039 5115 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5976ec5f_b09c_4f83_802d_6042842fd8e6.slice/crio-25c305cb1240d273fa3c305da112ecc85a86bebd9283beff23df411927835bcf WatchSource:0}: Error finding container 25c305cb1240d273fa3c305da112ecc85a86bebd9283beff23df411927835bcf: Status 404 returned error can't find the container with id 25c305cb1240d273fa3c305da112ecc85a86bebd9283beff23df411927835bcf Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.581591 5115 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 20 09:09:38 crc kubenswrapper[5115]: container 
&Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[/bin/bash -c #!/bin/bash
Jan 20 09:09:38 crc kubenswrapper[5115]: set -euo pipefail
Jan 20 09:09:38 crc kubenswrapper[5115]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key
Jan 20 09:09:38 crc kubenswrapper[5115]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt
Jan 20 09:09:38 crc kubenswrapper[5115]: # As the secret mount is optional we must wait for the files to be present.
Jan 20 09:09:38 crc kubenswrapper[5115]: # The service is created in monitor.yaml and this is created in sdn.yaml.
Jan 20 09:09:38 crc kubenswrapper[5115]: TS=$(date +%s)
Jan 20 09:09:38 crc kubenswrapper[5115]: WARN_TS=$(( ${TS} + $(( 20 * 60)) ))
Jan 20 09:09:38 crc kubenswrapper[5115]: HAS_LOGGED_INFO=0
Jan 20 09:09:38 crc kubenswrapper[5115]:
Jan 20 09:09:38 crc kubenswrapper[5115]: log_missing_certs(){
Jan 20 09:09:38 crc kubenswrapper[5115]: CUR_TS=$(date +%s)
Jan 20 09:09:38 crc kubenswrapper[5115]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then
Jan 20 09:09:38 crc kubenswrapper[5115]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes.
Jan 20 09:09:38 crc kubenswrapper[5115]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then
Jan 20 09:09:38 crc kubenswrapper[5115]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes.
Jan 20 09:09:38 crc kubenswrapper[5115]: HAS_LOGGED_INFO=1
Jan 20 09:09:38 crc kubenswrapper[5115]: fi
Jan 20 09:09:38 crc kubenswrapper[5115]: }
Jan 20 09:09:38 crc kubenswrapper[5115]: while [[ ! -f "${TLS_PK}" || ! -f "${TLS_CERT}" ]] ; do
Jan 20 09:09:38 crc kubenswrapper[5115]: log_missing_certs
Jan 20 09:09:38 crc kubenswrapper[5115]: sleep 5
Jan 20 09:09:38 crc kubenswrapper[5115]: done
Jan 20 09:09:38 crc kubenswrapper[5115]:
Jan 20 09:09:38 crc kubenswrapper[5115]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy
Jan 20 09:09:38 crc kubenswrapper[5115]: exec /usr/bin/kube-rbac-proxy \
Jan 20 09:09:38 crc kubenswrapper[5115]: --logtostderr \
Jan 20 09:09:38 crc kubenswrapper[5115]: --secure-listen-address=:9108 \
Jan 20 09:09:38 crc kubenswrapper[5115]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \
Jan 20 09:09:38 crc kubenswrapper[5115]: --upstream=http://127.0.0.1:29108/ \
Jan 20 09:09:38 crc kubenswrapper[5115]: --tls-private-key-file=${TLS_PK} \
Jan 20 09:09:38 crc kubenswrapper[5115]: --tls-cert-file=${TLS_CERT}
Jan 20 09:09:38 crc kubenswrapper[5115]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tt9ld,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-sfqm7_openshift-ovn-kubernetes(5976ec5f-b09c-4f83-802d-6042842fd8e6): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 20 09:09:38 crc kubenswrapper[5115]: > logger="UnhandledError" Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.584237 5115 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 20 09:09:38 crc kubenswrapper[5115]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Jan 20 09:09:38 crc kubenswrapper[5115]: if [[ -f "/env/_master" ]]; then Jan 20 09:09:38 crc kubenswrapper[5115]: set -o allexport Jan 20 09:09:38 crc kubenswrapper[5115]: source "/env/_master" Jan 20 09:09:38 crc kubenswrapper[5115]: set +o allexport Jan 20 09:09:38 crc kubenswrapper[5115]: fi Jan 20 09:09:38 crc kubenswrapper[5115]: Jan 20 09:09:38 crc kubenswrapper[5115]: ovn_v4_join_subnet_opt= Jan 20 09:09:38 crc kubenswrapper[5115]: if [[ "" != "" ]]; then Jan 20 09:09:38 crc kubenswrapper[5115]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Jan 20 
09:09:38 crc kubenswrapper[5115]: fi Jan 20 09:09:38 crc kubenswrapper[5115]: ovn_v6_join_subnet_opt= Jan 20 09:09:38 crc kubenswrapper[5115]: if [[ "" != "" ]]; then Jan 20 09:09:38 crc kubenswrapper[5115]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Jan 20 09:09:38 crc kubenswrapper[5115]: fi Jan 20 09:09:38 crc kubenswrapper[5115]: Jan 20 09:09:38 crc kubenswrapper[5115]: ovn_v4_transit_switch_subnet_opt= Jan 20 09:09:38 crc kubenswrapper[5115]: if [[ "" != "" ]]; then Jan 20 09:09:38 crc kubenswrapper[5115]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Jan 20 09:09:38 crc kubenswrapper[5115]: fi Jan 20 09:09:38 crc kubenswrapper[5115]: ovn_v6_transit_switch_subnet_opt= Jan 20 09:09:38 crc kubenswrapper[5115]: if [[ "" != "" ]]; then Jan 20 09:09:38 crc kubenswrapper[5115]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Jan 20 09:09:38 crc kubenswrapper[5115]: fi Jan 20 09:09:38 crc kubenswrapper[5115]: Jan 20 09:09:38 crc kubenswrapper[5115]: dns_name_resolver_enabled_flag= Jan 20 09:09:38 crc kubenswrapper[5115]: if [[ "false" == "true" ]]; then Jan 20 09:09:38 crc kubenswrapper[5115]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Jan 20 09:09:38 crc kubenswrapper[5115]: fi Jan 20 09:09:38 crc kubenswrapper[5115]: Jan 20 09:09:38 crc kubenswrapper[5115]: persistent_ips_enabled_flag="--enable-persistent-ips" Jan 20 09:09:38 crc kubenswrapper[5115]: Jan 20 09:09:38 crc kubenswrapper[5115]: # This is needed so that converting clusters from GA to TP Jan 20 09:09:38 crc kubenswrapper[5115]: # will rollout control plane pods as well Jan 20 09:09:38 crc kubenswrapper[5115]: network_segmentation_enabled_flag= Jan 20 09:09:38 crc kubenswrapper[5115]: multi_network_enabled_flag= Jan 20 09:09:38 crc kubenswrapper[5115]: if [[ "true" == "true" ]]; then Jan 20 09:09:38 crc kubenswrapper[5115]: multi_network_enabled_flag="--enable-multi-network" Jan 20 09:09:38 crc kubenswrapper[5115]: fi 
Jan 20 09:09:38 crc kubenswrapper[5115]: if [[ "true" == "true" ]]; then Jan 20 09:09:38 crc kubenswrapper[5115]: if [[ "true" != "true" ]]; then Jan 20 09:09:38 crc kubenswrapper[5115]: multi_network_enabled_flag="--enable-multi-network" Jan 20 09:09:38 crc kubenswrapper[5115]: fi Jan 20 09:09:38 crc kubenswrapper[5115]: network_segmentation_enabled_flag="--enable-network-segmentation" Jan 20 09:09:38 crc kubenswrapper[5115]: fi Jan 20 09:09:38 crc kubenswrapper[5115]: Jan 20 09:09:38 crc kubenswrapper[5115]: route_advertisements_enable_flag= Jan 20 09:09:38 crc kubenswrapper[5115]: if [[ "false" == "true" ]]; then Jan 20 09:09:38 crc kubenswrapper[5115]: route_advertisements_enable_flag="--enable-route-advertisements" Jan 20 09:09:38 crc kubenswrapper[5115]: fi Jan 20 09:09:38 crc kubenswrapper[5115]: Jan 20 09:09:38 crc kubenswrapper[5115]: preconfigured_udn_addresses_enable_flag= Jan 20 09:09:38 crc kubenswrapper[5115]: if [[ "false" == "true" ]]; then Jan 20 09:09:38 crc kubenswrapper[5115]: preconfigured_udn_addresses_enable_flag="--enable-preconfigured-udn-addresses" Jan 20 09:09:38 crc kubenswrapper[5115]: fi Jan 20 09:09:38 crc kubenswrapper[5115]: Jan 20 09:09:38 crc kubenswrapper[5115]: # Enable multi-network policy if configured (control-plane always full mode) Jan 20 09:09:38 crc kubenswrapper[5115]: multi_network_policy_enabled_flag= Jan 20 09:09:38 crc kubenswrapper[5115]: if [[ "false" == "true" ]]; then Jan 20 09:09:38 crc kubenswrapper[5115]: multi_network_policy_enabled_flag="--enable-multi-networkpolicy" Jan 20 09:09:38 crc kubenswrapper[5115]: fi Jan 20 09:09:38 crc kubenswrapper[5115]: Jan 20 09:09:38 crc kubenswrapper[5115]: # Enable admin network policy if configured (control-plane always full mode) Jan 20 09:09:38 crc kubenswrapper[5115]: admin_network_policy_enabled_flag= Jan 20 09:09:38 crc kubenswrapper[5115]: if [[ "true" == "true" ]]; then Jan 20 09:09:38 crc kubenswrapper[5115]: 
admin_network_policy_enabled_flag="--enable-admin-network-policy" Jan 20 09:09:38 crc kubenswrapper[5115]: fi Jan 20 09:09:38 crc kubenswrapper[5115]: Jan 20 09:09:38 crc kubenswrapper[5115]: if [ "shared" == "shared" ]; then Jan 20 09:09:38 crc kubenswrapper[5115]: gateway_mode_flags="--gateway-mode shared" Jan 20 09:09:38 crc kubenswrapper[5115]: elif [ "shared" == "local" ]; then Jan 20 09:09:38 crc kubenswrapper[5115]: gateway_mode_flags="--gateway-mode local" Jan 20 09:09:38 crc kubenswrapper[5115]: else Jan 20 09:09:38 crc kubenswrapper[5115]: echo "Invalid OVN_GATEWAY_MODE: \"shared\". Must be \"local\" or \"shared\"." Jan 20 09:09:38 crc kubenswrapper[5115]: exit 1 Jan 20 09:09:38 crc kubenswrapper[5115]: fi Jan 20 09:09:38 crc kubenswrapper[5115]: Jan 20 09:09:38 crc kubenswrapper[5115]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Jan 20 09:09:38 crc kubenswrapper[5115]: exec /usr/bin/ovnkube \ Jan 20 09:09:38 crc kubenswrapper[5115]: --enable-interconnect \ Jan 20 09:09:38 crc kubenswrapper[5115]: --init-cluster-manager "${K8S_NODE}" \ Jan 20 09:09:38 crc kubenswrapper[5115]: --config-file=/run/ovnkube-config/ovnkube.conf \ Jan 20 09:09:38 crc kubenswrapper[5115]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Jan 20 09:09:38 crc kubenswrapper[5115]: --metrics-bind-address "127.0.0.1:29108" \ Jan 20 09:09:38 crc kubenswrapper[5115]: --metrics-enable-pprof \ Jan 20 09:09:38 crc kubenswrapper[5115]: --metrics-enable-config-duration \ Jan 20 09:09:38 crc kubenswrapper[5115]: ${ovn_v4_join_subnet_opt} \ Jan 20 09:09:38 crc kubenswrapper[5115]: ${ovn_v6_join_subnet_opt} \ Jan 20 09:09:38 crc kubenswrapper[5115]: ${ovn_v4_transit_switch_subnet_opt} \ Jan 20 09:09:38 crc kubenswrapper[5115]: ${ovn_v6_transit_switch_subnet_opt} \ Jan 20 09:09:38 crc kubenswrapper[5115]: ${dns_name_resolver_enabled_flag} \ Jan 20 09:09:38 crc kubenswrapper[5115]: ${persistent_ips_enabled_flag} \ Jan 20 09:09:38 crc 
kubenswrapper[5115]: ${multi_network_enabled_flag} \ Jan 20 09:09:38 crc kubenswrapper[5115]: ${network_segmentation_enabled_flag} \ Jan 20 09:09:38 crc kubenswrapper[5115]: ${gateway_mode_flags} \ Jan 20 09:09:38 crc kubenswrapper[5115]: ${route_advertisements_enable_flag} \ Jan 20 09:09:38 crc kubenswrapper[5115]: ${preconfigured_udn_addresses_enable_flag} \ Jan 20 09:09:38 crc kubenswrapper[5115]: --enable-egress-ip=true \ Jan 20 09:09:38 crc kubenswrapper[5115]: --enable-egress-firewall=true \ Jan 20 09:09:38 crc kubenswrapper[5115]: --enable-egress-qos=true \ Jan 20 09:09:38 crc kubenswrapper[5115]: --enable-egress-service=true \ Jan 20 09:09:38 crc kubenswrapper[5115]: --enable-multicast \ Jan 20 09:09:38 crc kubenswrapper[5115]: --enable-multi-external-gateway=true \ Jan 20 09:09:38 crc kubenswrapper[5115]: ${multi_network_policy_enabled_flag} \ Jan 20 09:09:38 crc kubenswrapper[5115]: ${admin_network_policy_enabled_flag} Jan 20 09:09:38 crc kubenswrapper[5115]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tt9ld,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-sfqm7_openshift-ovn-kubernetes(5976ec5f-b09c-4f83-802d-6042842fd8e6): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 20 09:09:38 crc kubenswrapper[5115]: > logger="UnhandledError" Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.586093 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-sfqm7" podUID="5976ec5f-b09c-4f83-802d-6042842fd8e6" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.593117 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.593636 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.593672 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.593692 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.593716 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.593737 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:38Z","lastTransitionTime":"2026-01-20T09:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.610297 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"89a1923678c192fbf3a8fa027b144dadcc5e7008b1288bb632d710d9da597b3f"} Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.612657 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bmvv2" event={"ID":"4b42cc5a-50db-4588-8149-e758f33704ef","Type":"ContainerStarted","Data":"ec90b51c7f2a26c46864e33dac72b099ba300ae018c17039c97afc265a44269d"} Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.612774 5115 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 20 09:09:38 crc kubenswrapper[5115]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Jan 20 09:09:38 crc kubenswrapper[5115]: if [[ -f "/env/_master" ]]; then Jan 20 09:09:38 crc kubenswrapper[5115]: set -o allexport Jan 20 09:09:38 crc kubenswrapper[5115]: source "/env/_master" Jan 20 09:09:38 crc kubenswrapper[5115]: set +o allexport Jan 20 09:09:38 crc kubenswrapper[5115]: fi Jan 20 09:09:38 crc kubenswrapper[5115]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. 
Jan 20 09:09:38 crc kubenswrapper[5115]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Jan 20 09:09:38 crc kubenswrapper[5115]: ho_enable="--enable-hybrid-overlay" Jan 20 09:09:38 crc kubenswrapper[5115]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Jan 20 09:09:38 crc kubenswrapper[5115]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Jan 20 09:09:38 crc kubenswrapper[5115]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Jan 20 09:09:38 crc kubenswrapper[5115]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Jan 20 09:09:38 crc kubenswrapper[5115]: --webhook-cert-dir="/etc/webhook-cert" \ Jan 20 09:09:38 crc kubenswrapper[5115]: --webhook-host=127.0.0.1 \ Jan 20 09:09:38 crc kubenswrapper[5115]: --webhook-port=9743 \ Jan 20 09:09:38 crc kubenswrapper[5115]: ${ho_enable} \ Jan 20 09:09:38 crc kubenswrapper[5115]: --enable-interconnect \ Jan 20 09:09:38 crc kubenswrapper[5115]: --disable-approver \ Jan 20 09:09:38 crc kubenswrapper[5115]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Jan 20 09:09:38 crc kubenswrapper[5115]: --wait-for-kubernetes-api=200s \ Jan 20 09:09:38 crc kubenswrapper[5115]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Jan 20 09:09:38 crc kubenswrapper[5115]: --loglevel="${LOGLEVEL}" Jan 20 09:09:38 crc kubenswrapper[5115]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: 
{{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 20 09:09:38 crc kubenswrapper[5115]: > logger="UnhandledError" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.613692 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" event={"ID":"0b51ef97-33e0-4889-bd54-ac4be09c39e7","Type":"ContainerStarted","Data":"6d71e67b9b21d106693ff03a675acf8a5db31180ddb0ad6b25c400a878cf62f5"} Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 
09:09:38.616125 5115 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 20 09:09:38 crc kubenswrapper[5115]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Jan 20 09:09:38 crc kubenswrapper[5115]: apiVersion: v1 Jan 20 09:09:38 crc kubenswrapper[5115]: clusters: Jan 20 09:09:38 crc kubenswrapper[5115]: - cluster: Jan 20 09:09:38 crc kubenswrapper[5115]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Jan 20 09:09:38 crc kubenswrapper[5115]: server: https://api-int.crc.testing:6443 Jan 20 09:09:38 crc kubenswrapper[5115]: name: default-cluster Jan 20 09:09:38 crc kubenswrapper[5115]: contexts: Jan 20 09:09:38 crc kubenswrapper[5115]: - context: Jan 20 09:09:38 crc kubenswrapper[5115]: cluster: default-cluster Jan 20 09:09:38 crc kubenswrapper[5115]: namespace: default Jan 20 09:09:38 crc kubenswrapper[5115]: user: default-auth Jan 20 09:09:38 crc kubenswrapper[5115]: name: default-context Jan 20 09:09:38 crc kubenswrapper[5115]: current-context: default-context Jan 20 09:09:38 crc kubenswrapper[5115]: kind: Config Jan 20 09:09:38 crc kubenswrapper[5115]: preferences: {} Jan 20 09:09:38 crc kubenswrapper[5115]: users: Jan 20 09:09:38 crc kubenswrapper[5115]: - name: default-auth Jan 20 09:09:38 crc kubenswrapper[5115]: user: Jan 20 09:09:38 crc kubenswrapper[5115]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Jan 20 09:09:38 crc kubenswrapper[5115]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Jan 20 09:09:38 crc kubenswrapper[5115]: EOF Jan 20 09:09:38 crc kubenswrapper[5115]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f9kn4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-pnd9p_openshift-ovn-kubernetes(0b51ef97-33e0-4889-bd54-ac4be09c39e7): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 20 09:09:38 crc kubenswrapper[5115]: > logger="UnhandledError" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.616378 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"a218a6ae2be1beddf47aeeeff4e3067dfd815b4aa565a272744c67c1c9c4e7f9"} Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.616494 5115 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 20 09:09:38 crc kubenswrapper[5115]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Jan 20 09:09:38 crc kubenswrapper[5115]: if [[ -f "/env/_master" ]]; then Jan 20 09:09:38 crc kubenswrapper[5115]: set -o allexport Jan 20 09:09:38 crc kubenswrapper[5115]: source 
"/env/_master" Jan 20 09:09:38 crc kubenswrapper[5115]: set +o allexport Jan 20 09:09:38 crc kubenswrapper[5115]: fi Jan 20 09:09:38 crc kubenswrapper[5115]: Jan 20 09:09:38 crc kubenswrapper[5115]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Jan 20 09:09:38 crc kubenswrapper[5115]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Jan 20 09:09:38 crc kubenswrapper[5115]: --disable-webhook \ Jan 20 09:09:38 crc kubenswrapper[5115]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Jan 20 09:09:38 crc kubenswrapper[5115]: --loglevel="${LOGLEVEL}" Jan 20 09:09:38 crc kubenswrapper[5115]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePoli
cy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 20 09:09:38 crc kubenswrapper[5115]: > logger="UnhandledError" Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.617299 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" podUID="0b51ef97-33e0-4889-bd54-ac4be09c39e7" Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.617682 5115 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dsgwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-5jnd7_openshift-network-operator(428b39f5-eb1c-4f65-b7a4-eeb6e84860cc): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.617824 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-dgvkt" podUID="fc4541ce-7789-4670-bc75-5c2868e52ce0" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.618706 5115 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-sfqm7" event={"ID":"5976ec5f-b09c-4f83-802d-6042842fd8e6","Type":"ContainerStarted","Data":"25c305cb1240d273fa3c305da112ecc85a86bebd9283beff23df411927835bcf"} Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.619369 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-5jnd7" podUID="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.619668 5115 kuberuntime_manager.go:1358] "Unhandled Error" err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7h55j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,
MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-bmvv2_openshift-multus(4b42cc5a-50db-4588-8149-e758f33704ef): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.620026 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-bht7q" event={"ID":"650d165f-75fb-4a16-a8fa-d8366b5f6eea","Type":"ContainerStarted","Data":"e5b425fcbf0f92c258bd50d42451ebe08ec6bd14f9d6b2c2df15cbbe24f22153"} Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.620323 5115 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 20 09:09:38 crc kubenswrapper[5115]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[/bin/bash -c #!/bin/bash Jan 20 09:09:38 crc kubenswrapper[5115]: set -euo pipefail Jan 20 09:09:38 crc kubenswrapper[5115]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Jan 20 09:09:38 crc kubenswrapper[5115]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Jan 20 09:09:38 crc kubenswrapper[5115]: # As the secret mount is optional we must wait for the files to be present. Jan 20 09:09:38 crc kubenswrapper[5115]: # The service is created in monitor.yaml and this is created in sdn.yaml. 
Jan 20 09:09:38 crc kubenswrapper[5115]: TS=$(date +%s)
Jan 20 09:09:38 crc kubenswrapper[5115]: WARN_TS=$(( ${TS} + $(( 20 * 60)) ))
Jan 20 09:09:38 crc kubenswrapper[5115]: HAS_LOGGED_INFO=0
Jan 20 09:09:38 crc kubenswrapper[5115]:
Jan 20 09:09:38 crc kubenswrapper[5115]: log_missing_certs(){
Jan 20 09:09:38 crc kubenswrapper[5115]: CUR_TS=$(date +%s)
Jan 20 09:09:38 crc kubenswrapper[5115]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then
Jan 20 09:09:38 crc kubenswrapper[5115]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes.
Jan 20 09:09:38 crc kubenswrapper[5115]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then
Jan 20 09:09:38 crc kubenswrapper[5115]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes.
Jan 20 09:09:38 crc kubenswrapper[5115]: HAS_LOGGED_INFO=1
Jan 20 09:09:38 crc kubenswrapper[5115]: fi
Jan 20 09:09:38 crc kubenswrapper[5115]: }
Jan 20 09:09:38 crc kubenswrapper[5115]: while [[ ! -f "${TLS_PK}" || !
-f "${TLS_CERT}" ]] ; do
Jan 20 09:09:38 crc kubenswrapper[5115]: log_missing_certs
Jan 20 09:09:38 crc kubenswrapper[5115]: sleep 5
Jan 20 09:09:38 crc kubenswrapper[5115]: done
Jan 20 09:09:38 crc kubenswrapper[5115]:
Jan 20 09:09:38 crc kubenswrapper[5115]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy
Jan 20 09:09:38 crc kubenswrapper[5115]: exec /usr/bin/kube-rbac-proxy \
Jan 20 09:09:38 crc kubenswrapper[5115]: --logtostderr \
Jan 20 09:09:38 crc kubenswrapper[5115]: --secure-listen-address=:9108 \
Jan 20 09:09:38 crc kubenswrapper[5115]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \
Jan 20 09:09:38 crc kubenswrapper[5115]: --upstream=http://127.0.0.1:29108/ \
Jan 20 09:09:38 crc kubenswrapper[5115]: --tls-private-key-file=${TLS_PK} \
Jan 20 09:09:38 crc kubenswrapper[5115]: --tls-cert-file=${TLS_CERT}
Jan 20 09:09:38 crc kubenswrapper[5115]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tt9ld,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-sfqm7_openshift-ovn-kubernetes(5976ec5f-b09c-4f83-802d-6042842fd8e6): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 20 09:09:38 crc kubenswrapper[5115]: > logger="UnhandledError" Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.620824 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-bmvv2" podUID="4b42cc5a-50db-4588-8149-e758f33704ef" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.621697 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-5tt8v" event={"ID":"92f344d4-34bc-4412-83c9-6b7beb45db64","Type":"ContainerStarted","Data":"761c02e36f1798649923e6cbb82508db2e808bd879684a78aa4fa13cfd46c504"} Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.622192 5115 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 20 09:09:38 crc kubenswrapper[5115]: container 
&Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/bin/bash -c #!/bin/bash
Jan 20 09:09:38 crc kubenswrapper[5115]: set -uo pipefail
Jan 20 09:09:38 crc kubenswrapper[5115]:
Jan 20 09:09:38 crc kubenswrapper[5115]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM
Jan 20 09:09:38 crc kubenswrapper[5115]:
Jan 20 09:09:38 crc kubenswrapper[5115]: OPENSHIFT_MARKER="openshift-generated-node-resolver"
Jan 20 09:09:38 crc kubenswrapper[5115]: HOSTS_FILE="/etc/hosts"
Jan 20 09:09:38 crc kubenswrapper[5115]: TEMP_FILE="/tmp/hosts.tmp"
Jan 20 09:09:38 crc kubenswrapper[5115]:
Jan 20 09:09:38 crc kubenswrapper[5115]: IFS=', ' read -r -a services <<< "${SERVICES}"
Jan 20 09:09:38 crc kubenswrapper[5115]:
Jan 20 09:09:38 crc kubenswrapper[5115]: # Make a temporary file with the old hosts file's attributes.
Jan 20 09:09:38 crc kubenswrapper[5115]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then
Jan 20 09:09:38 crc kubenswrapper[5115]: echo "Failed to preserve hosts file. Exiting."
Jan 20 09:09:38 crc kubenswrapper[5115]: exit 1
Jan 20 09:09:38 crc kubenswrapper[5115]: fi
Jan 20 09:09:38 crc kubenswrapper[5115]:
Jan 20 09:09:38 crc kubenswrapper[5115]: while true; do
Jan 20 09:09:38 crc kubenswrapper[5115]: declare -A svc_ips
Jan 20 09:09:38 crc kubenswrapper[5115]: for svc in "${services[@]}"; do
Jan 20 09:09:38 crc kubenswrapper[5115]: # Fetch service IP from cluster dns if present. We make several tries
Jan 20 09:09:38 crc kubenswrapper[5115]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones
Jan 20 09:09:38 crc kubenswrapper[5115]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not
Jan 20 09:09:38 crc kubenswrapper[5115]: # support UDP loadbalancers and require reaching DNS through TCP.
Jan 20 09:09:38 crc kubenswrapper[5115]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"'
Jan 20 09:09:38 crc kubenswrapper[5115]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"'
Jan 20 09:09:38 crc kubenswrapper[5115]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"'
Jan 20 09:09:38 crc kubenswrapper[5115]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"')
Jan 20 09:09:38 crc kubenswrapper[5115]: for i in ${!cmds[*]}
Jan 20 09:09:38 crc kubenswrapper[5115]: do
Jan 20 09:09:38 crc kubenswrapper[5115]: ips=($(eval "${cmds[i]}"))
Jan 20 09:09:38 crc kubenswrapper[5115]: if [[ "$?" -eq 0 && "${#ips[@]}" -ne 0 ]]; then
Jan 20 09:09:38 crc kubenswrapper[5115]: svc_ips["${svc}"]="${ips[@]}"
Jan 20 09:09:38 crc kubenswrapper[5115]: break
Jan 20 09:09:38 crc kubenswrapper[5115]: fi
Jan 20 09:09:38 crc kubenswrapper[5115]: done
Jan 20 09:09:38 crc kubenswrapper[5115]: done
Jan 20 09:09:38 crc kubenswrapper[5115]:
Jan 20 09:09:38 crc kubenswrapper[5115]: # Update /etc/hosts only if we get valid service IPs
Jan 20 09:09:38 crc kubenswrapper[5115]: # We will not update /etc/hosts when there is coredns service outage or api unavailability
Jan 20 09:09:38 crc kubenswrapper[5115]: # Stale entries could exist in /etc/hosts if the service is deleted
Jan 20 09:09:38 crc kubenswrapper[5115]: if [[ -n "${svc_ips[*]-}" ]]; then
Jan 20 09:09:38 crc kubenswrapper[5115]: # Build a new hosts file from /etc/hosts with our custom entries filtered out
Jan 20 09:09:38 crc kubenswrapper[5115]: if !
sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then
Jan 20 09:09:38 crc kubenswrapper[5115]: # Only continue rebuilding the hosts entries if its original content is preserved
Jan 20 09:09:38 crc kubenswrapper[5115]: sleep 60 & wait
Jan 20 09:09:38 crc kubenswrapper[5115]: continue
Jan 20 09:09:38 crc kubenswrapper[5115]: fi
Jan 20 09:09:38 crc kubenswrapper[5115]:
Jan 20 09:09:38 crc kubenswrapper[5115]: # Append resolver entries for services
Jan 20 09:09:38 crc kubenswrapper[5115]: rc=0
Jan 20 09:09:38 crc kubenswrapper[5115]: for svc in "${!svc_ips[@]}"; do
Jan 20 09:09:38 crc kubenswrapper[5115]: for ip in ${svc_ips[${svc}]}; do
Jan 20 09:09:38 crc kubenswrapper[5115]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$?
Jan 20 09:09:38 crc kubenswrapper[5115]: done
Jan 20 09:09:38 crc kubenswrapper[5115]: done
Jan 20 09:09:38 crc kubenswrapper[5115]: if [[ $rc -ne 0 ]]; then
Jan 20 09:09:38 crc kubenswrapper[5115]: sleep 60 & wait
Jan 20 09:09:38 crc kubenswrapper[5115]: continue
Jan 20 09:09:38 crc kubenswrapper[5115]: fi
Jan 20 09:09:38 crc kubenswrapper[5115]:
Jan 20 09:09:38 crc kubenswrapper[5115]:
Jan 20 09:09:38 crc kubenswrapper[5115]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior
Jan 20 09:09:38 crc kubenswrapper[5115]: # Replace /etc/hosts with our modified version if needed
Jan 20 09:09:38 crc kubenswrapper[5115]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}"
Jan 20 09:09:38 crc kubenswrapper[5115]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn
Jan 20 09:09:38 crc kubenswrapper[5115]: fi
Jan 20 09:09:38 crc kubenswrapper[5115]: sleep 60 & wait
Jan 20 09:09:38 crc kubenswrapper[5115]: unset svc_ips
Jan 20 09:09:38 crc kubenswrapper[5115]: done
Jan 20 09:09:38 crc kubenswrapper[5115]:
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2p9bt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-bht7q_openshift-dns(650d165f-75fb-4a16-a8fa-d8366b5f6eea): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 20 09:09:38 crc kubenswrapper[5115]: > logger="UnhandledError" Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.622374 5115 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 20 09:09:38 crc kubenswrapper[5115]: container 
&Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe
Jan 20 09:09:38 crc kubenswrapper[5115]: if [[ -f "/env/_master" ]]; then
Jan 20 09:09:38 crc kubenswrapper[5115]: set -o allexport
Jan 20 09:09:38 crc kubenswrapper[5115]: source "/env/_master"
Jan 20 09:09:38 crc kubenswrapper[5115]: set +o allexport
Jan 20 09:09:38 crc kubenswrapper[5115]: fi
Jan 20 09:09:38 crc kubenswrapper[5115]:
Jan 20 09:09:38 crc kubenswrapper[5115]: ovn_v4_join_subnet_opt=
Jan 20 09:09:38 crc kubenswrapper[5115]: if [[ "" != "" ]]; then
Jan 20 09:09:38 crc kubenswrapper[5115]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet "
Jan 20 09:09:38 crc kubenswrapper[5115]: fi
Jan 20 09:09:38 crc kubenswrapper[5115]: ovn_v6_join_subnet_opt=
Jan 20 09:09:38 crc kubenswrapper[5115]: if [[ "" != "" ]]; then
Jan 20 09:09:38 crc kubenswrapper[5115]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet "
Jan 20 09:09:38 crc kubenswrapper[5115]: fi
Jan 20 09:09:38 crc kubenswrapper[5115]:
Jan 20 09:09:38 crc kubenswrapper[5115]: ovn_v4_transit_switch_subnet_opt=
Jan 20 09:09:38 crc kubenswrapper[5115]: if [[ "" != "" ]]; then
Jan 20 09:09:38 crc kubenswrapper[5115]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet "
Jan 20 09:09:38 crc kubenswrapper[5115]: fi
Jan 20 09:09:38 crc kubenswrapper[5115]: ovn_v6_transit_switch_subnet_opt=
Jan 20 09:09:38 crc kubenswrapper[5115]: if [[ "" != "" ]]; then
Jan 20 09:09:38 crc kubenswrapper[5115]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet "
Jan 20 09:09:38 crc kubenswrapper[5115]: fi
Jan 20 09:09:38 crc kubenswrapper[5115]:
Jan 20 09:09:38 crc kubenswrapper[5115]: dns_name_resolver_enabled_flag=
Jan 20 09:09:38 crc kubenswrapper[5115]: if [[ "false" == "true" ]]; then
Jan 20 09:09:38 crc kubenswrapper[5115]:
dns_name_resolver_enabled_flag="--enable-dns-name-resolver"
Jan 20 09:09:38 crc kubenswrapper[5115]: fi
Jan 20 09:09:38 crc kubenswrapper[5115]:
Jan 20 09:09:38 crc kubenswrapper[5115]: persistent_ips_enabled_flag="--enable-persistent-ips"
Jan 20 09:09:38 crc kubenswrapper[5115]:
Jan 20 09:09:38 crc kubenswrapper[5115]: # This is needed so that converting clusters from GA to TP
Jan 20 09:09:38 crc kubenswrapper[5115]: # will rollout control plane pods as well
Jan 20 09:09:38 crc kubenswrapper[5115]: network_segmentation_enabled_flag=
Jan 20 09:09:38 crc kubenswrapper[5115]: multi_network_enabled_flag=
Jan 20 09:09:38 crc kubenswrapper[5115]: if [[ "true" == "true" ]]; then
Jan 20 09:09:38 crc kubenswrapper[5115]: multi_network_enabled_flag="--enable-multi-network"
Jan 20 09:09:38 crc kubenswrapper[5115]: fi
Jan 20 09:09:38 crc kubenswrapper[5115]: if [[ "true" == "true" ]]; then
Jan 20 09:09:38 crc kubenswrapper[5115]: if [[ "true" != "true" ]]; then
Jan 20 09:09:38 crc kubenswrapper[5115]: multi_network_enabled_flag="--enable-multi-network"
Jan 20 09:09:38 crc kubenswrapper[5115]: fi
Jan 20 09:09:38 crc kubenswrapper[5115]: network_segmentation_enabled_flag="--enable-network-segmentation"
Jan 20 09:09:38 crc kubenswrapper[5115]: fi
Jan 20 09:09:38 crc kubenswrapper[5115]:
Jan 20 09:09:38 crc kubenswrapper[5115]: route_advertisements_enable_flag=
Jan 20 09:09:38 crc kubenswrapper[5115]: if [[ "false" == "true" ]]; then
Jan 20 09:09:38 crc kubenswrapper[5115]: route_advertisements_enable_flag="--enable-route-advertisements"
Jan 20 09:09:38 crc kubenswrapper[5115]: fi
Jan 20 09:09:38 crc kubenswrapper[5115]:
Jan 20 09:09:38 crc kubenswrapper[5115]: preconfigured_udn_addresses_enable_flag=
Jan 20 09:09:38 crc kubenswrapper[5115]: if [[ "false" == "true" ]]; then
Jan 20 09:09:38 crc kubenswrapper[5115]: preconfigured_udn_addresses_enable_flag="--enable-preconfigured-udn-addresses"
Jan 20 09:09:38 crc kubenswrapper[5115]: fi
Jan 20 09:09:38 crc kubenswrapper[5115]:
Jan 20 09:09:38 crc kubenswrapper[5115]: # Enable multi-network policy if configured (control-plane always full mode)
Jan 20 09:09:38 crc kubenswrapper[5115]: multi_network_policy_enabled_flag=
Jan 20 09:09:38 crc kubenswrapper[5115]: if [[ "false" == "true" ]]; then
Jan 20 09:09:38 crc kubenswrapper[5115]: multi_network_policy_enabled_flag="--enable-multi-networkpolicy"
Jan 20 09:09:38 crc kubenswrapper[5115]: fi
Jan 20 09:09:38 crc kubenswrapper[5115]:
Jan 20 09:09:38 crc kubenswrapper[5115]: # Enable admin network policy if configured (control-plane always full mode)
Jan 20 09:09:38 crc kubenswrapper[5115]: admin_network_policy_enabled_flag=
Jan 20 09:09:38 crc kubenswrapper[5115]: if [[ "true" == "true" ]]; then
Jan 20 09:09:38 crc kubenswrapper[5115]: admin_network_policy_enabled_flag="--enable-admin-network-policy"
Jan 20 09:09:38 crc kubenswrapper[5115]: fi
Jan 20 09:09:38 crc kubenswrapper[5115]:
Jan 20 09:09:38 crc kubenswrapper[5115]: if [ "shared" == "shared" ]; then
Jan 20 09:09:38 crc kubenswrapper[5115]: gateway_mode_flags="--gateway-mode shared"
Jan 20 09:09:38 crc kubenswrapper[5115]: elif [ "shared" == "local" ]; then
Jan 20 09:09:38 crc kubenswrapper[5115]: gateway_mode_flags="--gateway-mode local"
Jan 20 09:09:38 crc kubenswrapper[5115]: else
Jan 20 09:09:38 crc kubenswrapper[5115]: echo "Invalid OVN_GATEWAY_MODE: \"shared\". Must be \"local\" or \"shared\"."
Jan 20 09:09:38 crc kubenswrapper[5115]: exit 1
Jan 20 09:09:38 crc kubenswrapper[5115]: fi
Jan 20 09:09:38 crc kubenswrapper[5115]:
Jan 20 09:09:38 crc kubenswrapper[5115]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}"
Jan 20 09:09:38 crc kubenswrapper[5115]: exec /usr/bin/ovnkube \
Jan 20 09:09:38 crc kubenswrapper[5115]: --enable-interconnect \
Jan 20 09:09:38 crc kubenswrapper[5115]: --init-cluster-manager "${K8S_NODE}" \
Jan 20 09:09:38 crc kubenswrapper[5115]: --config-file=/run/ovnkube-config/ovnkube.conf \
Jan 20 09:09:38 crc kubenswrapper[5115]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \
Jan 20 09:09:38 crc kubenswrapper[5115]: --metrics-bind-address "127.0.0.1:29108" \
Jan 20 09:09:38 crc kubenswrapper[5115]: --metrics-enable-pprof \
Jan 20 09:09:38 crc kubenswrapper[5115]: --metrics-enable-config-duration \
Jan 20 09:09:38 crc kubenswrapper[5115]: ${ovn_v4_join_subnet_opt} \
Jan 20 09:09:38 crc kubenswrapper[5115]: ${ovn_v6_join_subnet_opt} \
Jan 20 09:09:38 crc kubenswrapper[5115]: ${ovn_v4_transit_switch_subnet_opt} \
Jan 20 09:09:38 crc kubenswrapper[5115]: ${ovn_v6_transit_switch_subnet_opt} \
Jan 20 09:09:38 crc kubenswrapper[5115]: ${dns_name_resolver_enabled_flag} \
Jan 20 09:09:38 crc kubenswrapper[5115]: ${persistent_ips_enabled_flag} \
Jan 20 09:09:38 crc kubenswrapper[5115]: ${multi_network_enabled_flag} \
Jan 20 09:09:38 crc kubenswrapper[5115]: ${network_segmentation_enabled_flag} \
Jan 20 09:09:38 crc kubenswrapper[5115]: ${gateway_mode_flags} \
Jan 20 09:09:38 crc kubenswrapper[5115]: ${route_advertisements_enable_flag} \
Jan 20 09:09:38 crc kubenswrapper[5115]: ${preconfigured_udn_addresses_enable_flag} \
Jan 20 09:09:38 crc kubenswrapper[5115]: --enable-egress-ip=true \
Jan 20 09:09:38 crc kubenswrapper[5115]: --enable-egress-firewall=true \
Jan 20 09:09:38 crc kubenswrapper[5115]: --enable-egress-qos=true \
Jan 20 09:09:38 crc kubenswrapper[5115]: --enable-egress-service=true \
Jan 20 09:09:38 crc kubenswrapper[5115]: --enable-multicast \ Jan 20 09:09:38 crc kubenswrapper[5115]: --enable-multi-external-gateway=true \ Jan 20 09:09:38 crc kubenswrapper[5115]: ${multi_network_policy_enabled_flag} \ Jan 20 09:09:38 crc kubenswrapper[5115]: ${admin_network_policy_enabled_flag} Jan 20 09:09:38 crc kubenswrapper[5115]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tt9ld,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
ovnkube-control-plane-57b78d8988-sfqm7_openshift-ovn-kubernetes(5976ec5f-b09c-4f83-802d-6042842fd8e6): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 20 09:09:38 crc kubenswrapper[5115]: > logger="UnhandledError" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.622924 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-xjql7" event={"ID":"f41177fd-db48-43c1-9a8d-69cad41d3fab","Type":"ContainerStarted","Data":"659f0ebcc7c90f8ab600f9b5cdedfe62387d5d2f5f114dc5c0d0a72e2046bbb2"} Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.623168 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.623327 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-bht7q" podUID="650d165f-75fb-4a16-a8fa-d8366b5f6eea" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.623717 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"c7548b475343c320509e713d055f6b58242bf38e80dabe0f83bc5f5b246e5948"} Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.623792 5115 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 20 09:09:38 crc kubenswrapper[5115]: container 
&Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM
Jan 20 09:09:38 crc kubenswrapper[5115]: while [ true ];
Jan 20 09:09:38 crc kubenswrapper[5115]: do
Jan 20 09:09:38 crc kubenswrapper[5115]: for f in $(ls /tmp/serviceca); do
Jan 20 09:09:38 crc kubenswrapper[5115]: echo $f
Jan 20 09:09:38 crc kubenswrapper[5115]: ca_file_path="/tmp/serviceca/${f}"
Jan 20 09:09:38 crc kubenswrapper[5115]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/')
Jan 20 09:09:38 crc kubenswrapper[5115]: reg_dir_path="/etc/docker/certs.d/${f}"
Jan 20 09:09:38 crc kubenswrapper[5115]: if [ -e "${reg_dir_path}" ]; then
Jan 20 09:09:38 crc kubenswrapper[5115]: cp -u $ca_file_path $reg_dir_path/ca.crt
Jan 20 09:09:38 crc kubenswrapper[5115]: else
Jan 20 09:09:38 crc kubenswrapper[5115]: mkdir $reg_dir_path
Jan 20 09:09:38 crc kubenswrapper[5115]: cp $ca_file_path $reg_dir_path/ca.crt
Jan 20 09:09:38 crc kubenswrapper[5115]: fi
Jan 20 09:09:38 crc kubenswrapper[5115]: done
Jan 20 09:09:38 crc kubenswrapper[5115]: for d in $(ls /etc/docker/certs.d); do
Jan 20 09:09:38 crc kubenswrapper[5115]: echo $d
Jan 20 09:09:38 crc kubenswrapper[5115]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./')
Jan 20 09:09:38 crc kubenswrapper[5115]: reg_conf_path="/tmp/serviceca/${dp}"
Jan 20 09:09:38 crc kubenswrapper[5115]: if [ !
-e "${reg_conf_path}" ]; then Jan 20 09:09:38 crc kubenswrapper[5115]: rm -rf /etc/docker/certs.d/$d Jan 20 09:09:38 crc kubenswrapper[5115]: fi Jan 20 09:09:38 crc kubenswrapper[5115]: done Jan 20 09:09:38 crc kubenswrapper[5115]: sleep 60 & wait ${!} Jan 20 09:09:38 crc kubenswrapper[5115]: done Jan 20 09:09:38 crc kubenswrapper[5115]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rwps7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-5tt8v_openshift-image-registry(92f344d4-34bc-4412-83c9-6b7beb45db64): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 20 09:09:38 crc kubenswrapper[5115]: > logger="UnhandledError" Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.623884 5115 
pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-sfqm7" podUID="5976ec5f-b09c-4f83-802d-6042842fd8e6" Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.624875 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-5tt8v" podUID="92f344d4-34bc-4412-83c9-6b7beb45db64" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.624924 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zvfcd" event={"ID":"dc89765b-3b00-4f86-ae67-a5088c182918","Type":"ContainerStarted","Data":"29419067e362c04408ee6901ca499156e52be8d357dd0341693b338a5accc60c"} Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.624941 5115 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 20 09:09:38 crc kubenswrapper[5115]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,Command:[/bin/bash -c #!/bin/bash Jan 20 09:09:38 crc kubenswrapper[5115]: set -o allexport Jan 20 09:09:38 crc kubenswrapper[5115]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Jan 20 09:09:38 crc kubenswrapper[5115]: source /etc/kubernetes/apiserver-url.env Jan 20 09:09:38 crc kubenswrapper[5115]: else Jan 20 09:09:38 crc kubenswrapper[5115]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Jan 20 09:09:38 crc kubenswrapper[5115]: 
exit 1 Jan 20 09:09:38 crc kubenswrapper[5115]: fi Jan 20 09:09:38 crc kubenswrapper[5115]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Jan 20 09:09:38 crc kubenswrapper[5115]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.20.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:951276a60f15185a05902cf1ec49b6db3e4f049ec638828b336aed496f8dfc45,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b5000f8f055fd8f734ef74afbd9bd5333a38345cbc4959ddaad728b8394bccd4,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be136d591a0eeb3f7bedf04aabb5481a23b6645316d5cef3cd5be1787344c2b5,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value
:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91997a073272252cac9cd31915ec74217637c55d1abc725107c6eb677ddddc9b,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a974f04d4aefdb39bf2d4649b24e7e0e87685afa3d07ca46234f1a0c5688e4b,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b
23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7xz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-7bdcf4f5bd-7fjxv_openshift-network-operator(34177974-8d82-49d2-a763-391d0df3bbd8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 20 09:09:38 crc kubenswrapper[5115]: > logger="UnhandledError" Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.625298 5115 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 20 09:09:38 crc kubenswrapper[5115]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Jan 20 09:09:38 crc kubenswrapper[5115]: 
/entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Jan 20 09:09:38 crc kubenswrapper[5115]: ],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{
Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6zmmw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-xjql7_openshift-multus(f41177fd-db48-43c1-9a8d-69cad41d3fab): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 20 09:09:38 crc kubenswrapper[5115]: > logger="UnhandledError" Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.626091 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" podUID="34177974-8d82-49d2-a763-391d0df3bbd8" Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.626464 5115 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-xjql7" podUID="f41177fd-db48-43c1-9a8d-69cad41d3fab" Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.626512 5115 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.20.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7g8mg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 
},Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-zvfcd_openshift-machine-config-operator(dc89765b-3b00-4f86-ae67-a5088c182918): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.629036 5115 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt 
--tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7g8mg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-zvfcd_openshift-machine-config-operator(dc89765b-3b00-4f86-ae67-a5088c182918): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.630789 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-zvfcd" 
podUID="dc89765b-3b00-4f86-ae67-a5088c182918" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.635769 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-xjql7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f41177fd-db48-43c1-9a8d-69cad41d3fab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zmmw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xjql7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.643292 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5tt8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92f344d4-34bc-4412-83c9-6b7beb45db64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwps7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5tt8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.660953 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69226b59-0946-40c7-a9a3-38368638de30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://3438785036ee5cce0cfb7ef5015765de9e91020a660f22067f83fe7088f6983a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",
\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://7cf2bf860f3578cf077c66e64feccdb0f4aa9b087c452b75e9089435dbe938ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f147340eaa8ad9365db74bb82cf821ebd6579e31407e87af1956220ccf9907a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\
"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c3c4ab2513a300c9031279fe7c4f932126d69745f336cee3a8adcd6cd8bd0cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://92465a413675efac7faed27b64279954bdfa6292127a177c3bff862358a9a025\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a68
2480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f10288ab59089753121af19b0f7ad453fc7b7a50d66e5d429b1f5e46962dbc35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f10288ab59089753121af19b0f7ad453fc7b7a50d66e5d429b1f5e46962dbc35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}
,\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://7eb1250e743174b8367f49423c7ae31f24a26c0ce04d9cf83c7747f12e1363c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7eb1250e743174b8367f49423c7ae31f24a26c0ce04d9cf83c7747f12e1363c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://e0f6ae1955cc25375e7a0bdbfb67efdcbb6a702b9faf54d40a444392f0e7c151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-
o://e0f6ae1955cc25375e7a0bdbfb67efdcbb6a702b9faf54d40a444392f0e7c151\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:08:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.676519 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.687161 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.696324 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.696373 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.696388 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.696409 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.696424 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:38Z","lastTransitionTime":"2026-01-20T09:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.697672 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zvfcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc89765b-3b00-4f86-ae67-a5088c182918\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7g8mg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7g8mg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zvfcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.706218 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-bht7q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"650d165f-75fb-4a16-a8fa-d8366b5f6eea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2p9bt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bht7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.729270 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bmvv2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b42cc5a-50db-4588-8149-e758f33704ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"
name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bmvv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.759090 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.759223 5115 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.759445 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-20 09:09:39.759422861 +0000 UTC m=+89.928201391 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.759715 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.760332 5115 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.760394 5115 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.760418 5115 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.760948 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. 
No retries permitted until 2026-01-20 09:09:39.7608841 +0000 UTC m=+89.929662670 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.761470 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.761888 5115 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.762014 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-20 09:09:39.7619906 +0000 UTC m=+89.930769170 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.772524 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5125ab95-d5cf-48ad-a899-3add343eaeba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://732f833d741db4f25185d597b6c55514eac6e2fefadb22332239b99e78faa12c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4459efcaad2c1e7ab6acad4f70731a19325a72c01d38b2f5c5ebb0e654c3e652\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7bc7ce39ff7ab01bae0a1441c0086dd0bb588059f1c38dcf038a03d08f73e0f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b33a6c20a1
9b1f41a4ad2db77e01a63543fbc61ea2e64cd19c7c6530bae76c3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b33a6c20a19b1f41a4ad2db77e01a63543fbc61ea2e64cd19c7c6530bae76c3b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T09:09:22Z\\\",\\\"message\\\":\\\"var.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"ClientsAllowCBOR\\\\\\\" enabled=false\\\\nW0120 09:09:21.702814 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0120 09:09:21.703031 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0120 09:09:21.704002 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4007456761/tls.crt::/tmp/serving-cert-4007456761/tls.key\\\\\\\"\\\\nI0120 09:09:22.179437 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 09:09:22.181269 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 09:09:22.181287 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 09:09:22.181316 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 09:09:22.181321 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 09:09:22.184781 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 09:09:22.184834 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:09:22.184840 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:09:22.184845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 09:09:22.184848 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 09:09:22.184851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 09:09:22.184854 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 09:09:22.185244 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 09:09:22.186562 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T09:09:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6a65133584c92a02557ec7a68bc231cbf328c72b94121d393761fae9e77a43df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f042b661a3072f2466176ad58de653a1ac5fd34d0d1c9b846b833b88bded9006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f042b661a3072f2466176ad58de653a1ac5fd34d0d1c9b846b833b88bded9006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:08:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.798412 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.798452 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.798462 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.798476 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.798487 5115 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:38Z","lastTransitionTime":"2026-01-20T09:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.813632 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.854688 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-sfqm7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5976ec5f-b09c-4f83-802d-6042842fd8e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt9ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt9ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-sfqm7\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.862523 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.862680 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.862771 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3d8f5093-1a2e-4c32-8c74-b6cfb185cc99-metrics-certs\") pod \"network-metrics-daemon-tzrjx\" (UID: \"3d8f5093-1a2e-4c32-8c74-b6cfb185cc99\") " pod="openshift-multus/network-metrics-daemon-tzrjx" Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.862831 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:09:39.862793401 +0000 UTC m=+90.031571931 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.862981 5115 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.863032 5115 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.863074 5115 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.863101 5115 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.863115 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d8f5093-1a2e-4c32-8c74-b6cfb185cc99-metrics-certs podName:3d8f5093-1a2e-4c32-8c74-b6cfb185cc99 nodeName:}" failed. No retries permitted until 2026-01-20 09:09:39.863081929 +0000 UTC m=+90.031860499 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3d8f5093-1a2e-4c32-8c74-b6cfb185cc99-metrics-certs") pod "network-metrics-daemon-tzrjx" (UID: "3d8f5093-1a2e-4c32-8c74-b6cfb185cc99") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.863197 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-20 09:09:39.863171701 +0000 UTC m=+90.031950361 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.893049 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25383c7b-b61c-48bd-b099-c7c8f90c6c1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f93bd1c4ac75f0c99554549eefe09dda170f1b0afebc9787b7fd0a0494295d1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://af4e50c289ad8aa409f7e54320df8a017c15f3b8c608266e408fe79c4155cbc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af4e50c289ad8aa409f7e54320df8a017c15f3b8c608266e408fe79c4155cbc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:08:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.901487 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:38 crc 
kubenswrapper[5115]: I0120 09:09:38.901557 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.901584 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.901650 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.901678 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:38Z","lastTransitionTime":"2026-01-20T09:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.938162 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c36dad2-2b5f-476d-ae16-db72a8a479e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://9cba2d9418782f2aa23b490fca45506e8a44b0f733ce30c248299532a7c06d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name
\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://cee213223198b5e3642cdac2764daeb64bf20128377548aa985feafed2a3d447\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a2d7f893e43011292fd2dc960e3f3f89c2af1830eace24fdafba43340a362e1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6c00207af01190039121d0127e5a029446b01758e672d57fe7d8c31b546a00d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\
":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:08:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.975503 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74386c11-427f-467a-bfa5-799093f908c9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://62aeee29713cf7b320e1bbf81544cbd80fb6575f67080fb534f54cbf1267a767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca0
8e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://568bfe79c3828aa5c26a80f41e7507eaa2342c0c17fb8d4b2e330a163c96af56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\
"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://81b0daa998eef062af8f4d4bb257256cfa372aed58e0bbba4e167bbfa574acd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://70cfe00add4342ecc6915b829d12c1c4a9476005e6a1442944be45056180b3f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70cfe00add4342ecc6915b829d12c1c4a9476005e6
a1442944be45056180b3f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:08:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.007108 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.007193 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.007226 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.007265 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.007291 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:39Z","lastTransitionTime":"2026-01-20T09:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.016568 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.051841 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tzrjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d8f5093-1a2e-4c32-8c74-b6cfb185cc99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtcxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtcxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tzrjx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.096604 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b51ef97-33e0-4889-bd54-ac4be09c39e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pnd9p\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.109995 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.110079 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.110108 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.110142 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.110170 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:39Z","lastTransitionTime":"2026-01-20T09:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.130545 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.172128 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zvfcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc89765b-3b00-4f86-ae67-a5088c182918\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7g8mg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7g8mg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zvfcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.212244 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-bht7q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"650d165f-75fb-4a16-a8fa-d8366b5f6eea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2p9bt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bht7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.213334 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.213453 5115 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.213480 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.213524 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.213551 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:39Z","lastTransitionTime":"2026-01-20T09:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.256474 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bmvv2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b42cc5a-50db-4588-8149-e758f33704ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bmvv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.298173 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5125ab95-d5cf-48ad-a899-3add343eaeba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://732f833d741db4f25185d597b6c55514eac6e2fefadb22332239b99e78faa12c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\
\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4459efcaad2c1e7ab6acad4f70731a19325a72c01d38b2f5c5ebb0e654c3e652\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7bc7ce39ff7ab01bae0a1441c0086dd0bb588059f1c38dcf038a03d08f73e0f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc47
5aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b33a6c20a19b1f41a4ad2db77e01a63543fbc61ea2e64cd19c7c6530bae76c3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b33a6c20a19b1f41a4ad2db77e01a63543fbc61ea2e64cd19c7c6530bae76c3b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T09:09:22Z\\\",\\\"message\\\":\\\"var.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"ClientsAllowCBOR\\\\\\\" enabled=false\\\\nW0120 09:09:21.702814 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0120 09:09:21.703031 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0120 09:09:21.704002 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-4007456761/tls.crt::/tmp/serving-cert-4007456761/tls.key\\\\\\\"\\\\nI0120 09:09:22.179437 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 09:09:22.181269 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 09:09:22.181287 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 09:09:22.181316 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 09:09:22.181321 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 09:09:22.184781 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 09:09:22.184834 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:09:22.184840 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:09:22.184845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 09:09:22.184848 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 09:09:22.184851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 09:09:22.184854 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 09:09:22.185244 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 09:09:22.186562 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T09:09:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6a65133584c92a02557ec7a68bc231cbf328c72b94121d393761fae9e77a43df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\
":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f042b661a3072f2466176ad58de653a1ac5fd34d0d1c9b846b833b88bded9006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f042b661a3072f2466176ad58de653a1ac5fd34d0d1c9b846b833b88bded9006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:08:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.320128 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.320243 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 
09:09:39.320341 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.320395 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.320420 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:39Z","lastTransitionTime":"2026-01-20T09:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.336242 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.372655 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-sfqm7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5976ec5f-b09c-4f83-802d-6042842fd8e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt9ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt9ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-sfqm7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.411260 5115 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25383c7b-b61c-48bd-b099-c7c8f90c6c1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f93bd1c4ac75f0c99554549eefe09dda170f1b0afebc9787b7fd0a0494295d1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disable
d\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://af4e50c289ad8aa409f7e54320df8a017c15f3b8c608266e408fe79c4155cbc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af4e50c289ad8aa409f7e54320df8a017c15f3b8c608266e408fe79c4155cbc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:08:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.423453 5115 kubelet_node_status.go:736] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.423557 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.423582 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.423616 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.423637 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:39Z","lastTransitionTime":"2026-01-20T09:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.455840 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c36dad2-2b5f-476d-ae16-db72a8a479e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://9cba2d9418782f2aa23b490fca45506e8a44b0f733ce30c248299532a7c06d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\
":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://cee213223198b5e3642cdac2764daeb64bf20128377548aa985feafed2a3d447\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a2d7f893e43011292fd2dc960e3f3f89c2af1830eace24fdafba43340a362e1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde72610
9a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6c00207af01190039121d0127e5a029446b01758e672d57fe7d8c31b546a00d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-p
od-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:08:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.494968 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"74386c11-427f-467a-bfa5-799093f908c9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://62aeee29713cf7b320e1bbf81544cbd80fb6575f67080fb534f54cbf1267a767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\
\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://568bfe79c3828aa5c26a80f41e7507eaa2342c0c17fb8d4b2e330a163c96af56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://81b0daa998eef062af8f4d4bb257256cfa372aed58e0bbba4e167bbfa574acd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\"
:0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://70cfe00add4342ecc6915b829d12c1c4a9476005e6a1442944be45056180b3f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70cfe00add4342ecc6915b829d12c1c4a9476005e6a1442944be45056180b3f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:08:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.526054 5115 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.526132 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.526153 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.526182 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.526201 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:39Z","lastTransitionTime":"2026-01-20T09:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.535046 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.572139 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tzrjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d8f5093-1a2e-4c32-8c74-b6cfb185cc99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtcxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtcxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tzrjx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.626259 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b51ef97-33e0-4889-bd54-ac4be09c39e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pnd9p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.630913 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.630944 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.630955 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.630968 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.630978 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:39Z","lastTransitionTime":"2026-01-20T09:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.654230 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.694304 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.730340 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-xjql7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f41177fd-db48-43c1-9a8d-69cad41d3fab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zmmw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xjql7\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.733396 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.733460 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.733476 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.733497 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.733512 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:39Z","lastTransitionTime":"2026-01-20T09:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.770969 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5tt8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92f344d4-34bc-4412-83c9-6b7beb45db64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwps7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5tt8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.773783 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.773937 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.773991 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: 
\"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 20 09:09:39 crc kubenswrapper[5115]: E0120 09:09:39.774200 5115 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 09:09:39 crc kubenswrapper[5115]: E0120 09:09:39.774236 5115 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 09:09:39 crc kubenswrapper[5115]: E0120 09:09:39.774255 5115 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 09:09:39 crc kubenswrapper[5115]: E0120 09:09:39.774335 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-20 09:09:41.774311628 +0000 UTC m=+91.943090198 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 09:09:39 crc kubenswrapper[5115]: E0120 09:09:39.774404 5115 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 09:09:39 crc kubenswrapper[5115]: E0120 09:09:39.774446 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-20 09:09:41.774434531 +0000 UTC m=+91.943213101 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 09:09:39 crc kubenswrapper[5115]: E0120 09:09:39.774516 5115 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 09:09:39 crc kubenswrapper[5115]: E0120 09:09:39.774556 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-20 09:09:41.774545304 +0000 UTC m=+91.943323864 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.821368 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69226b59-0946-40c7-a9a3-38368638de30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://3438785036ee5cce0cfb7ef5015765de9e91020a660f22067f83fe7088f6983a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\"
:{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://7cf2bf860f3578cf077c66e64feccdb0f4aa9b087c452b75e9089435dbe938ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"cont
ainerID\\\":\\\"cri-o://f147340eaa8ad9365db74bb82cf821ebd6579e31407e87af1956220ccf9907a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c3c4ab2513a300c9031279fe7c4f932126d69745f336cee3a8adcd6cd8bd0cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://92465a413675efac7faed27b64279954bdfa6292127a177c3bff862358a9a025\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f10288ab59089753121af19b0f7ad453fc7b7a50d66e5d429b1f5e46962dbc35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\
":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f10288ab59089753121af19b0f7ad453fc7b7a50d66e5d429b1f5e46962dbc35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://7eb1250e743174b8367f49423c7ae31f24a26c0ce04d9cf83c7747f12e1363c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7eb1250e743174b8367f49423c7ae31f24a26c0ce04d9cf83c7747f12e1363c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://e0f6ae1955cc25375e7a0bdbfb67efdcbb6a702b9faf54d40a444392f0e7c151\\\",\\\"image\\\":\\\"qu
ay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0f6ae1955cc25375e7a0bdbfb67efdcbb6a702b9faf54d40a444392f0e7c151\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:08:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.845618 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.845707 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 
09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.845734 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.845761 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.845782 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:39Z","lastTransitionTime":"2026-01-20T09:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.854062 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.875198 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: 
\"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:09:39 crc kubenswrapper[5115]: E0120 09:09:39.875447 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:09:41.875411747 +0000 UTC m=+92.044190287 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.875525 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3d8f5093-1a2e-4c32-8c74-b6cfb185cc99-metrics-certs\") pod \"network-metrics-daemon-tzrjx\" (UID: \"3d8f5093-1a2e-4c32-8c74-b6cfb185cc99\") " pod="openshift-multus/network-metrics-daemon-tzrjx" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.875713 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 20 09:09:39 crc kubenswrapper[5115]: E0120 09:09:39.875733 5115 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 09:09:39 crc kubenswrapper[5115]: E0120 09:09:39.875825 5115 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d8f5093-1a2e-4c32-8c74-b6cfb185cc99-metrics-certs podName:3d8f5093-1a2e-4c32-8c74-b6cfb185cc99 nodeName:}" failed. No retries permitted until 2026-01-20 09:09:41.875800258 +0000 UTC m=+92.044578818 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3d8f5093-1a2e-4c32-8c74-b6cfb185cc99-metrics-certs") pod "network-metrics-daemon-tzrjx" (UID: "3d8f5093-1a2e-4c32-8c74-b6cfb185cc99") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 09:09:39 crc kubenswrapper[5115]: E0120 09:09:39.875873 5115 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 09:09:39 crc kubenswrapper[5115]: E0120 09:09:39.875889 5115 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 09:09:39 crc kubenswrapper[5115]: E0120 09:09:39.875923 5115 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 09:09:39 crc kubenswrapper[5115]: E0120 09:09:39.875972 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-20 09:09:41.875962702 +0000 UTC m=+92.044741242 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.894504 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.949484 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.949553 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.949571 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.949594 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.949610 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:39Z","lastTransitionTime":"2026-01-20T09:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.053462 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.053543 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.053565 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.053603 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.053629 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:40Z","lastTransitionTime":"2026-01-20T09:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.094445 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.094538 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.094568 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.094599 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.094621 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:40Z","lastTransitionTime":"2026-01-20T09:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:40 crc kubenswrapper[5115]: E0120 09:09:40.110452 5115 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f3c68733-f696-46f4-a86e-b22c133b82e3\\\",\\\"systemUUID\\\":\\\"4e7ead0d-ccd6-45dd-b671-f46e59803438\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.115369 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.115428 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.115446 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.115470 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.115491 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:40Z","lastTransitionTime":"2026-01-20T09:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:40 crc kubenswrapper[5115]: E0120 09:09:40.130383 5115 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f3c68733-f696-46f4-a86e-b22c133b82e3\\\",\\\"systemUUID\\\":\\\"4e7ead0d-ccd6-45dd-b671-f46e59803438\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.141799 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.141874 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.141926 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.141981 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.142005 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:40Z","lastTransitionTime":"2026-01-20T09:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:40 crc kubenswrapper[5115]: E0120 09:09:40.159855 5115 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f3c68733-f696-46f4-a86e-b22c133b82e3\\\",\\\"systemUUID\\\":\\\"4e7ead0d-ccd6-45dd-b671-f46e59803438\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.165086 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.165163 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.165191 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.165224 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.165247 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:40Z","lastTransitionTime":"2026-01-20T09:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:40 crc kubenswrapper[5115]: E0120 09:09:40.181607 5115 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f3c68733-f696-46f4-a86e-b22c133b82e3\\\",\\\"systemUUID\\\":\\\"4e7ead0d-ccd6-45dd-b671-f46e59803438\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.186056 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.186138 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.186162 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.186192 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.186216 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:40Z","lastTransitionTime":"2026-01-20T09:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:40 crc kubenswrapper[5115]: E0120 09:09:40.202890 5115 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f3c68733-f696-46f4-a86e-b22c133b82e3\\\",\\\"systemUUID\\\":\\\"4e7ead0d-ccd6-45dd-b671-f46e59803438\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:40 crc kubenswrapper[5115]: E0120 09:09:40.203242 5115 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.205202 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.205276 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.205304 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.205337 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.205360 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:40Z","lastTransitionTime":"2026-01-20T09:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.216368 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 20 09:09:40 crc kubenswrapper[5115]: E0120 09:09:40.216567 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.216598 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 20 09:09:40 crc kubenswrapper[5115]: E0120 09:09:40.216950 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.217036 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tzrjx" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.217083 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 20 09:09:40 crc kubenswrapper[5115]: E0120 09:09:40.217346 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tzrjx" podUID="3d8f5093-1a2e-4c32-8c74-b6cfb185cc99" Jan 20 09:09:40 crc kubenswrapper[5115]: E0120 09:09:40.219302 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.224565 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01080b46-74f1-4191-8755-5152a57b3b25" path="/var/lib/kubelet/pods/01080b46-74f1-4191-8755-5152a57b3b25/volumes" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.227055 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09cfa50b-4138-4585-a53e-64dd3ab73335" path="/var/lib/kubelet/pods/09cfa50b-4138-4585-a53e-64dd3ab73335/volumes" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.230077 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" path="/var/lib/kubelet/pods/0dd0fbac-8c0d-4228-8faa-abbeedabf7db/volumes" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.234868 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0effdbcf-dd7d-404d-9d48-77536d665a5d" path="/var/lib/kubelet/pods/0effdbcf-dd7d-404d-9d48-77536d665a5d/volumes" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.235634 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c36dad2-2b5f-476d-ae16-db72a8a479e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://9cba2d9418782f2aa23b490fca45506e8a44b0f733ce30c248299532a7c06d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name
\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://cee213223198b5e3642cdac2764daeb64bf20128377548aa985feafed2a3d447\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a2d7f893e43011292fd2dc960e3f3f89c2af1830eace24fdafba43340a362e1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6c00207af01190039121d0127e5a029446b01758e672d57fe7d8c31b546a00d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\
":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:08:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.243229 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="149b3c48-e17c-4a66-a835-d86dabf6ff13" path="/var/lib/kubelet/pods/149b3c48-e17c-4a66-a835-d86dabf6ff13/volumes" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.248267 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16bdd140-dce1-464c-ab47-dd5798d1d256" path="/var/lib/kubelet/pods/16bdd140-dce1-464c-ab47-dd5798d1d256/volumes" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.252157 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18f80adb-c1c3-49ba-8ee4-932c851d3897" path="/var/lib/kubelet/pods/18f80adb-c1c3-49ba-8ee4-932c851d3897/volumes" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.253269 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"74386c11-427f-467a-bfa5-799093f908c9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://62aeee29713cf7b320e1bbf81544cbd80fb6575f67080fb534f54cbf1267a767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\
\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://568bfe79c3828aa5c26a80f41e7507eaa2342c0c17fb8d4b2e330a163c96af56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://81b0daa998eef062af8f4d4bb257256cfa372aed58e0bbba4e167bbfa574acd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\"
:0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://70cfe00add4342ecc6915b829d12c1c4a9476005e6a1442944be45056180b3f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70cfe00add4342ecc6915b829d12c1c4a9476005e6a1442944be45056180b3f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:08:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.254444 5115 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" path="/var/lib/kubelet/pods/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e/volumes" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.257216 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2325ffef-9d5b-447f-b00e-3efc429acefe" path="/var/lib/kubelet/pods/2325ffef-9d5b-447f-b00e-3efc429acefe/volumes" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.262032 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="301e1965-1754-483d-b6cc-bfae7038bbca" path="/var/lib/kubelet/pods/301e1965-1754-483d-b6cc-bfae7038bbca/volumes" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.264657 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31fa8943-81cc-4750-a0b7-0fa9ab5af883" path="/var/lib/kubelet/pods/31fa8943-81cc-4750-a0b7-0fa9ab5af883/volumes" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.267005 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42a11a02-47e1-488f-b270-2679d3298b0e" path="/var/lib/kubelet/pods/42a11a02-47e1-488f-b270-2679d3298b0e/volumes" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.269445 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.269500 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="567683bd-0efc-4f21-b076-e28559628404" path="/var/lib/kubelet/pods/567683bd-0efc-4f21-b076-e28559628404/volumes" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.272228 5115 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="584e1f4a-8205-47d7-8efb-3afc6017c4c9" path="/var/lib/kubelet/pods/584e1f4a-8205-47d7-8efb-3afc6017c4c9/volumes" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.274060 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="593a3561-7760-45c5-8f91-5aaef7475d0f" path="/var/lib/kubelet/pods/593a3561-7760-45c5-8f91-5aaef7475d0f/volumes" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.275526 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ebfebf6-3ecd-458e-943f-bb25b52e2718" path="/var/lib/kubelet/pods/5ebfebf6-3ecd-458e-943f-bb25b52e2718/volumes" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.277072 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6077b63e-53a2-4f96-9d56-1ce0324e4913" path="/var/lib/kubelet/pods/6077b63e-53a2-4f96-9d56-1ce0324e4913/volumes" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.282705 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" path="/var/lib/kubelet/pods/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca/volumes" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.283120 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tzrjx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d8f5093-1a2e-4c32-8c74-b6cfb185cc99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtcxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtcxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tzrjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.290332 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="6edfcf45-925b-4eff-b940-95b6fc0b85d4" path="/var/lib/kubelet/pods/6edfcf45-925b-4eff-b940-95b6fc0b85d4/volumes" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.292976 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ee8fbd3-1f81-4666-96da-5afc70819f1a" path="/var/lib/kubelet/pods/6ee8fbd3-1f81-4666-96da-5afc70819f1a/volumes" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.295989 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" path="/var/lib/kubelet/pods/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a/volumes" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.299408 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="736c54fe-349c-4bb9-870a-d1c1d1c03831" path="/var/lib/kubelet/pods/736c54fe-349c-4bb9-870a-d1c1d1c03831/volumes" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.301827 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7599e0b6-bddf-4def-b7f2-0b32206e8651" path="/var/lib/kubelet/pods/7599e0b6-bddf-4def-b7f2-0b32206e8651/volumes" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.304281 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7afa918d-be67-40a6-803c-d3b0ae99d815" path="/var/lib/kubelet/pods/7afa918d-be67-40a6-803c-d3b0ae99d815/volumes" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.306818 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7df94c10-441d-4386-93a6-6730fb7bcde0" path="/var/lib/kubelet/pods/7df94c10-441d-4386-93a6-6730fb7bcde0/volumes" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.307842 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.307934 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:40 crc 
kubenswrapper[5115]: I0120 09:09:40.307960 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.307988 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.308008 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:40Z","lastTransitionTime":"2026-01-20T09:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.310260 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" path="/var/lib/kubelet/pods/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a/volumes" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.313997 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81e39f7b-62e4-4fc9-992a-6535ce127a02" path="/var/lib/kubelet/pods/81e39f7b-62e4-4fc9-992a-6535ce127a02/volumes" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.314173 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b51ef97-33e0-4889-bd54-ac4be09c39e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pnd9p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.319343 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="869851b9-7ffb-4af0-b166-1d8aa40a5f80" path="/var/lib/kubelet/pods/869851b9-7ffb-4af0-b166-1d8aa40a5f80/volumes" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.326706 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" path="/var/lib/kubelet/pods/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff/volumes" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.329743 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92dfbade-90b6-4169-8c07-72cff7f2c82b" path="/var/lib/kubelet/pods/92dfbade-90b6-4169-8c07-72cff7f2c82b/volumes" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.332014 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.334074 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94a6e063-3d1a-4d44-875d-185291448c31" path="/var/lib/kubelet/pods/94a6e063-3d1a-4d44-875d-185291448c31/volumes" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.335348 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f71a554-e414-4bc3-96d2-674060397afe" path="/var/lib/kubelet/pods/9f71a554-e414-4bc3-96d2-674060397afe/volumes" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.336722 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="a208c9c2-333b-4b4a-be0d-bc32ec38a821" path="/var/lib/kubelet/pods/a208c9c2-333b-4b4a-be0d-bc32ec38a821/volumes" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.338023 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" path="/var/lib/kubelet/pods/a52afe44-fb37-46ed-a1f8-bf39727a3cbe/volumes" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.338866 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a555ff2e-0be6-46d5-897d-863bb92ae2b3" path="/var/lib/kubelet/pods/a555ff2e-0be6-46d5-897d-863bb92ae2b3/volumes" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.339920 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7a88189-c967-4640-879e-27665747f20c" path="/var/lib/kubelet/pods/a7a88189-c967-4640-879e-27665747f20c/volumes" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.340613 5115 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.340719 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volumes" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.344357 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af41de71-79cf-4590-bbe9-9e8b848862cb" path="/var/lib/kubelet/pods/af41de71-79cf-4590-bbe9-9e8b848862cb/volumes" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.345559 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" path="/var/lib/kubelet/pods/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a/volumes" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.347052 5115 
status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.347169 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4750666-1362-4001-abd0-6f89964cc621" path="/var/lib/kubelet/pods/b4750666-1362-4001-abd0-6f89964cc621/volumes" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.348513 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b605f283-6f2e-42da-a838-54421690f7d0" path="/var/lib/kubelet/pods/b605f283-6f2e-42da-a838-54421690f7d0/volumes" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.349223 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c491984c-7d4b-44aa-8c1e-d7974424fa47" path="/var/lib/kubelet/pods/c491984c-7d4b-44aa-8c1e-d7974424fa47/volumes" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.350603 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="c5f2bfad-70f6-4185-a3d9-81ce12720767" path="/var/lib/kubelet/pods/c5f2bfad-70f6-4185-a3d9-81ce12720767/volumes" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.351290 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc85e424-18b2-4924-920b-bd291a8c4b01" path="/var/lib/kubelet/pods/cc85e424-18b2-4924-920b-bd291a8c4b01/volumes" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.351801 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce090a97-9ab6-4c40-a719-64ff2acd9778" path="/var/lib/kubelet/pods/ce090a97-9ab6-4c40-a719-64ff2acd9778/volumes" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.352985 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d19cb085-0c5b-4810-b654-ce7923221d90" path="/var/lib/kubelet/pods/d19cb085-0c5b-4810-b654-ce7923221d90/volumes" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.354027 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" path="/var/lib/kubelet/pods/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7/volumes" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.355281 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d565531a-ff86-4608-9d19-767de01ac31b" path="/var/lib/kubelet/pods/d565531a-ff86-4608-9d19-767de01ac31b/volumes" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.356368 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7e8f42f-dc0e-424b-bb56-5ec849834888" path="/var/lib/kubelet/pods/d7e8f42f-dc0e-424b-bb56-5ec849834888/volumes" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.357050 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" path="/var/lib/kubelet/pods/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9/volumes" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.357730 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="e093be35-bb62-4843-b2e8-094545761610" path="/var/lib/kubelet/pods/e093be35-bb62-4843-b2e8-094545761610/volumes" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.359051 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" path="/var/lib/kubelet/pods/e1d2a42d-af1d-4054-9618-ab545e0ed8b7/volumes" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.360318 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f559dfa3-3917-43a2-97f6-61ddfda10e93" path="/var/lib/kubelet/pods/f559dfa3-3917-43a2-97f6-61ddfda10e93/volumes" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.361624 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f65c0ac1-8bca-454d-a2e6-e35cb418beac" path="/var/lib/kubelet/pods/f65c0ac1-8bca-454d-a2e6-e35cb418beac/volumes" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.363792 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" path="/var/lib/kubelet/pods/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4/volumes" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.364228 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-xjql7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f41177fd-db48-43c1-9a8d-69cad41d3fab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zmmw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xjql7\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.365231 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7e2c886-118e-43bb-bef1-c78134de392b" path="/var/lib/kubelet/pods/f7e2c886-118e-43bb-bef1-c78134de392b/volumes" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.367034 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" path="/var/lib/kubelet/pods/fc8db2c7-859d-47b3-a900-2bd0c0b2973b/volumes" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.373368 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5tt8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92f344d4-34bc-4412-83c9-6b7beb45db64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwps7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5tt8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.401210 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69226b59-0946-40c7-a9a3-38368638de30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://3438785036ee5cce0cfb7ef5015765de9e91020a660f22067f83fe7088f6983a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",
\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://7cf2bf860f3578cf077c66e64feccdb0f4aa9b087c452b75e9089435dbe938ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f147340eaa8ad9365db74bb82cf821ebd6579e31407e87af1956220ccf9907a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\
"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c3c4ab2513a300c9031279fe7c4f932126d69745f336cee3a8adcd6cd8bd0cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://92465a413675efac7faed27b64279954bdfa6292127a177c3bff862358a9a025\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a68
2480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f10288ab59089753121af19b0f7ad453fc7b7a50d66e5d429b1f5e46962dbc35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f10288ab59089753121af19b0f7ad453fc7b7a50d66e5d429b1f5e46962dbc35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}
,\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://7eb1250e743174b8367f49423c7ae31f24a26c0ce04d9cf83c7747f12e1363c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7eb1250e743174b8367f49423c7ae31f24a26c0ce04d9cf83c7747f12e1363c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://e0f6ae1955cc25375e7a0bdbfb67efdcbb6a702b9faf54d40a444392f0e7c151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-
o://e0f6ae1955cc25375e7a0bdbfb67efdcbb6a702b9faf54d40a444392f0e7c151\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:08:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.411479 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.411544 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.411563 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.411591 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.411611 5115 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:40Z","lastTransitionTime":"2026-01-20T09:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.421321 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.435454 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.449135 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zvfcd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc89765b-3b00-4f86-ae67-a5088c182918\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7g8mg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7g8mg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zvfcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.475512 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-bht7q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"650d165f-75fb-4a16-a8fa-d8366b5f6eea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2p9bt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bht7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.496190 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bmvv2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b42cc5a-50db-4588-8149-e758f33704ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bmvv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.514637 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.514700 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.514718 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:40 
crc kubenswrapper[5115]: I0120 09:09:40.514743 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.514761 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:40Z","lastTransitionTime":"2026-01-20T09:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.538124 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5125ab95-d5cf-48ad-a899-3add343eaeba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://732f833d741db4f25185d597b6c55514eac6e2fefadb22332239b99e78faa12c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4459efcaad2c1e7ab6acad4f70731a19325a72c01d38b2f5c5ebb0e654c3e652\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7bc7ce39ff7ab01bae0a1441c0086dd0bb588059f1c38dcf038a03d08f73e0f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b33a6c20a1
9b1f41a4ad2db77e01a63543fbc61ea2e64cd19c7c6530bae76c3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b33a6c20a19b1f41a4ad2db77e01a63543fbc61ea2e64cd19c7c6530bae76c3b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T09:09:22Z\\\",\\\"message\\\":\\\"var.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"ClientsAllowCBOR\\\\\\\" enabled=false\\\\nW0120 09:09:21.702814 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0120 09:09:21.703031 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0120 09:09:21.704002 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4007456761/tls.crt::/tmp/serving-cert-4007456761/tls.key\\\\\\\"\\\\nI0120 09:09:22.179437 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 09:09:22.181269 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 09:09:22.181287 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 09:09:22.181316 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 09:09:22.181321 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 09:09:22.184781 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 09:09:22.184834 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:09:22.184840 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:09:22.184845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 09:09:22.184848 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 09:09:22.184851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 09:09:22.184854 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 09:09:22.185244 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 09:09:22.186562 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T09:09:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6a65133584c92a02557ec7a68bc231cbf328c72b94121d393761fae9e77a43df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f042b661a3072f2466176ad58de653a1ac5fd34d0d1c9b846b833b88bded9006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f042b661a3072f2466176ad58de653a1ac5fd34d0d1c9b846b833b88bded9006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:08:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.576861 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.612943 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-sfqm7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5976ec5f-b09c-4f83-802d-6042842fd8e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt9ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt9ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-sfqm7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.618074 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 
09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.618135 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.618154 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.618178 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.618196 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:40Z","lastTransitionTime":"2026-01-20T09:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.652449 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25383c7b-b61c-48bd-b099-c7c8f90c6c1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f93bd1c4ac75f0c99554549eefe09dda170f1b0afebc9787b7fd0a0494295d1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://af4e50c289ad8aa409f7e54320df8a017c15f3b8c608266e408fe79c4155cbc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af4e50c289ad8aa409f7e54320df8a017c15f3b8c608266e408fe79c4155cbc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:08:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.721486 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:40 crc 
kubenswrapper[5115]: I0120 09:09:40.721569 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.721597 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.721628 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.721653 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:40Z","lastTransitionTime":"2026-01-20T09:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.824064 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.824144 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.824171 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.824202 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.824224 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:40Z","lastTransitionTime":"2026-01-20T09:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.926609 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.926694 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.926714 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.926737 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.926749 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:40Z","lastTransitionTime":"2026-01-20T09:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.030095 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.030168 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.030187 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.030213 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.030233 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:41Z","lastTransitionTime":"2026-01-20T09:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.132790 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.132864 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.132931 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.132967 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.132991 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:41Z","lastTransitionTime":"2026-01-20T09:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.236381 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.236471 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.236493 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.236522 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.236542 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:41Z","lastTransitionTime":"2026-01-20T09:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.339014 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.339086 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.339105 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.339130 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.339148 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:41Z","lastTransitionTime":"2026-01-20T09:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.442347 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.442448 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.442477 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.442512 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.442539 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:41Z","lastTransitionTime":"2026-01-20T09:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.546111 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.546210 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.546280 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.546308 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.546331 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:41Z","lastTransitionTime":"2026-01-20T09:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.648323 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.648417 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.648449 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.648483 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.648505 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:41Z","lastTransitionTime":"2026-01-20T09:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.752080 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.752160 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.752187 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.752219 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.752242 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:41Z","lastTransitionTime":"2026-01-20T09:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.799434 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.799533 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 20 09:09:41 crc kubenswrapper[5115]: E0120 09:09:41.799629 5115 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 09:09:41 crc kubenswrapper[5115]: E0120 09:09:41.799712 5115 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 09:09:41 crc kubenswrapper[5115]: E0120 09:09:41.799742 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-20 09:09:45.799715105 +0000 UTC m=+95.968493665 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.799629 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 20 09:09:41 crc kubenswrapper[5115]: E0120 09:09:41.799800 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-20 09:09:45.799775437 +0000 UTC m=+95.968554007 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 09:09:41 crc kubenswrapper[5115]: E0120 09:09:41.799857 5115 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 09:09:41 crc kubenswrapper[5115]: E0120 09:09:41.799955 5115 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 09:09:41 crc kubenswrapper[5115]: E0120 09:09:41.799985 5115 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 09:09:41 crc kubenswrapper[5115]: E0120 09:09:41.800074 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-20 09:09:45.800047764 +0000 UTC m=+95.968826334 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.855682 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.855776 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.855800 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.855828 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.855845 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:41Z","lastTransitionTime":"2026-01-20T09:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.901272 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:09:41 crc kubenswrapper[5115]: E0120 09:09:41.901410 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:09:45.90139123 +0000 UTC m=+96.070169760 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.901530 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.901619 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3d8f5093-1a2e-4c32-8c74-b6cfb185cc99-metrics-certs\") pod 
\"network-metrics-daemon-tzrjx\" (UID: \"3d8f5093-1a2e-4c32-8c74-b6cfb185cc99\") " pod="openshift-multus/network-metrics-daemon-tzrjx" Jan 20 09:09:41 crc kubenswrapper[5115]: E0120 09:09:41.901759 5115 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 09:09:41 crc kubenswrapper[5115]: E0120 09:09:41.901822 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d8f5093-1a2e-4c32-8c74-b6cfb185cc99-metrics-certs podName:3d8f5093-1a2e-4c32-8c74-b6cfb185cc99 nodeName:}" failed. No retries permitted until 2026-01-20 09:09:45.901813031 +0000 UTC m=+96.070591561 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3d8f5093-1a2e-4c32-8c74-b6cfb185cc99-metrics-certs") pod "network-metrics-daemon-tzrjx" (UID: "3d8f5093-1a2e-4c32-8c74-b6cfb185cc99") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 09:09:41 crc kubenswrapper[5115]: E0120 09:09:41.902185 5115 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 09:09:41 crc kubenswrapper[5115]: E0120 09:09:41.902199 5115 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 09:09:41 crc kubenswrapper[5115]: E0120 09:09:41.902210 5115 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 09:09:41 crc kubenswrapper[5115]: E0120 09:09:41.902252 5115 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-20 09:09:45.902243273 +0000 UTC m=+96.071021803 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.958557 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.958612 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.958652 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.958670 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.958682 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:41Z","lastTransitionTime":"2026-01-20T09:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.061162 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.061223 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.061248 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.061262 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.061270 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:42Z","lastTransitionTime":"2026-01-20T09:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.164732 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.164805 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.164825 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.164849 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.164868 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:42Z","lastTransitionTime":"2026-01-20T09:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.222259 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 20 09:09:42 crc kubenswrapper[5115]: E0120 09:09:42.222421 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.222563 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.222574 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tzrjx" Jan 20 09:09:42 crc kubenswrapper[5115]: E0120 09:09:42.222833 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.222611 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 20 09:09:42 crc kubenswrapper[5115]: E0120 09:09:42.223072 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tzrjx" podUID="3d8f5093-1a2e-4c32-8c74-b6cfb185cc99" Jan 20 09:09:42 crc kubenswrapper[5115]: E0120 09:09:42.223205 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.254477 5115 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.268179 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.268241 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.268260 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.268289 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.268309 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:42Z","lastTransitionTime":"2026-01-20T09:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.371678 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.371749 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.371766 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.371790 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.371807 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:42Z","lastTransitionTime":"2026-01-20T09:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.474424 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.474783 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.474853 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.474943 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.475019 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:42Z","lastTransitionTime":"2026-01-20T09:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.578429 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.578484 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.578495 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.578514 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.578528 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:42Z","lastTransitionTime":"2026-01-20T09:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.681204 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.681286 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.681308 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.681339 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.681366 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:42Z","lastTransitionTime":"2026-01-20T09:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.784262 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.784625 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.784730 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.784838 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.784972 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:42Z","lastTransitionTime":"2026-01-20T09:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.888123 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.888189 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.888207 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.888232 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.888248 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:42Z","lastTransitionTime":"2026-01-20T09:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.991510 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.991602 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.991631 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.991666 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.991691 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:42Z","lastTransitionTime":"2026-01-20T09:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:43 crc kubenswrapper[5115]: I0120 09:09:43.094300 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:43 crc kubenswrapper[5115]: I0120 09:09:43.094393 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:43 crc kubenswrapper[5115]: I0120 09:09:43.094415 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:43 crc kubenswrapper[5115]: I0120 09:09:43.094446 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:43 crc kubenswrapper[5115]: I0120 09:09:43.094466 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:43Z","lastTransitionTime":"2026-01-20T09:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:43 crc kubenswrapper[5115]: I0120 09:09:43.197540 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:43 crc kubenswrapper[5115]: I0120 09:09:43.197615 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:43 crc kubenswrapper[5115]: I0120 09:09:43.197638 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:43 crc kubenswrapper[5115]: I0120 09:09:43.197664 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:43 crc kubenswrapper[5115]: I0120 09:09:43.197684 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:43Z","lastTransitionTime":"2026-01-20T09:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:43 crc kubenswrapper[5115]: I0120 09:09:43.301014 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:43 crc kubenswrapper[5115]: I0120 09:09:43.301423 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:43 crc kubenswrapper[5115]: I0120 09:09:43.301546 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:43 crc kubenswrapper[5115]: I0120 09:09:43.301655 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:43 crc kubenswrapper[5115]: I0120 09:09:43.301772 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:43Z","lastTransitionTime":"2026-01-20T09:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:43 crc kubenswrapper[5115]: I0120 09:09:43.405074 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:43 crc kubenswrapper[5115]: I0120 09:09:43.405397 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:43 crc kubenswrapper[5115]: I0120 09:09:43.405489 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:43 crc kubenswrapper[5115]: I0120 09:09:43.405573 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:43 crc kubenswrapper[5115]: I0120 09:09:43.405645 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:43Z","lastTransitionTime":"2026-01-20T09:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:43 crc kubenswrapper[5115]: I0120 09:09:43.508542 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:43 crc kubenswrapper[5115]: I0120 09:09:43.508651 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:43 crc kubenswrapper[5115]: I0120 09:09:43.508676 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:43 crc kubenswrapper[5115]: I0120 09:09:43.508716 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:43 crc kubenswrapper[5115]: I0120 09:09:43.508742 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:43Z","lastTransitionTime":"2026-01-20T09:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:43 crc kubenswrapper[5115]: I0120 09:09:43.611812 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:43 crc kubenswrapper[5115]: I0120 09:09:43.612315 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:43 crc kubenswrapper[5115]: I0120 09:09:43.612407 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:43 crc kubenswrapper[5115]: I0120 09:09:43.612511 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:43 crc kubenswrapper[5115]: I0120 09:09:43.612595 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:43Z","lastTransitionTime":"2026-01-20T09:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:43 crc kubenswrapper[5115]: I0120 09:09:43.715402 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:43 crc kubenswrapper[5115]: I0120 09:09:43.715854 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:43 crc kubenswrapper[5115]: I0120 09:09:43.716032 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:43 crc kubenswrapper[5115]: I0120 09:09:43.716239 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:43 crc kubenswrapper[5115]: I0120 09:09:43.716430 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:43Z","lastTransitionTime":"2026-01-20T09:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:43 crc kubenswrapper[5115]: I0120 09:09:43.819746 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:43 crc kubenswrapper[5115]: I0120 09:09:43.819821 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:43 crc kubenswrapper[5115]: I0120 09:09:43.819841 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:43 crc kubenswrapper[5115]: I0120 09:09:43.819868 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:43 crc kubenswrapper[5115]: I0120 09:09:43.819920 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:43Z","lastTransitionTime":"2026-01-20T09:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:43 crc kubenswrapper[5115]: I0120 09:09:43.922780 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:43 crc kubenswrapper[5115]: I0120 09:09:43.922862 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:43 crc kubenswrapper[5115]: I0120 09:09:43.922878 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:43 crc kubenswrapper[5115]: I0120 09:09:43.922912 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:43 crc kubenswrapper[5115]: I0120 09:09:43.922924 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:43Z","lastTransitionTime":"2026-01-20T09:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.025176 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.025223 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.025232 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.025244 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.025253 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:44Z","lastTransitionTime":"2026-01-20T09:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.128522 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.128609 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.128629 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.128656 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.128680 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:44Z","lastTransitionTime":"2026-01-20T09:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.216888 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tzrjx" Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.216937 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.217161 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 20 09:09:44 crc kubenswrapper[5115]: E0120 09:09:44.217943 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 20 09:09:44 crc kubenswrapper[5115]: E0120 09:09:44.218003 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.217227 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 20 09:09:44 crc kubenswrapper[5115]: E0120 09:09:44.218121 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 20 09:09:44 crc kubenswrapper[5115]: E0120 09:09:44.217833 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tzrjx" podUID="3d8f5093-1a2e-4c32-8c74-b6cfb185cc99" Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.232455 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.232580 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.232605 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.232629 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.232684 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:44Z","lastTransitionTime":"2026-01-20T09:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.335382 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.335470 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.335497 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.335533 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.335556 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:44Z","lastTransitionTime":"2026-01-20T09:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.438666 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.438762 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.438789 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.438819 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.438840 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:44Z","lastTransitionTime":"2026-01-20T09:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.542513 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.542928 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.543045 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.543163 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.543257 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:44Z","lastTransitionTime":"2026-01-20T09:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.646648 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.646721 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.646746 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.646779 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.646804 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:44Z","lastTransitionTime":"2026-01-20T09:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.751470 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.751558 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.751596 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.751630 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.751656 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:44Z","lastTransitionTime":"2026-01-20T09:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.855046 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.855150 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.855179 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.855213 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.855239 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:44Z","lastTransitionTime":"2026-01-20T09:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.958521 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.958576 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.958588 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.958609 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.958623 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:44Z","lastTransitionTime":"2026-01-20T09:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.061409 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.061485 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.061502 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.061530 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.061547 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:45Z","lastTransitionTime":"2026-01-20T09:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.163638 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.163685 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.163698 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.163718 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.163731 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:45Z","lastTransitionTime":"2026-01-20T09:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.265889 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.265947 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.265959 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.265972 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.265981 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:45Z","lastTransitionTime":"2026-01-20T09:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.368519 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.368578 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.368588 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.368603 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.368630 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:45Z","lastTransitionTime":"2026-01-20T09:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.471527 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.471594 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.471615 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.471639 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.471659 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:45Z","lastTransitionTime":"2026-01-20T09:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.574542 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.574654 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.574673 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.574692 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.574706 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:45Z","lastTransitionTime":"2026-01-20T09:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.677469 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.677568 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.677594 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.677626 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.677648 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:45Z","lastTransitionTime":"2026-01-20T09:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.780429 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.780560 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.780580 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.780607 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.780624 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:45Z","lastTransitionTime":"2026-01-20T09:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.852849 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.853030 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.853127 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 20 09:09:45 crc kubenswrapper[5115]: E0120 09:09:45.853154 5115 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 09:09:45 crc kubenswrapper[5115]: E0120 09:09:45.853306 5115 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 09:09:45 crc kubenswrapper[5115]: E0120 09:09:45.853348 5115 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-20 09:09:53.853312655 +0000 UTC m=+104.022091225 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 09:09:45 crc kubenswrapper[5115]: E0120 09:09:45.853360 5115 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 09:09:45 crc kubenswrapper[5115]: E0120 09:09:45.853391 5115 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 09:09:45 crc kubenswrapper[5115]: E0120 09:09:45.853519 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-20 09:09:53.853468809 +0000 UTC m=+104.022247379 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 09:09:45 crc kubenswrapper[5115]: E0120 09:09:45.853319 5115 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 09:09:45 crc kubenswrapper[5115]: E0120 09:09:45.853634 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-20 09:09:53.853613123 +0000 UTC m=+104.022391703 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.883548 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.883634 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.883654 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.883680 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.883701 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:45Z","lastTransitionTime":"2026-01-20T09:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.954789 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.955111 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.955176 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3d8f5093-1a2e-4c32-8c74-b6cfb185cc99-metrics-certs\") pod \"network-metrics-daemon-tzrjx\" (UID: \"3d8f5093-1a2e-4c32-8c74-b6cfb185cc99\") " pod="openshift-multus/network-metrics-daemon-tzrjx" Jan 20 09:09:45 crc kubenswrapper[5115]: E0120 09:09:45.955301 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:09:53.955215146 +0000 UTC m=+104.123993716 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:09:45 crc kubenswrapper[5115]: E0120 09:09:45.955525 5115 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 09:09:45 crc kubenswrapper[5115]: E0120 09:09:45.955593 5115 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 09:09:45 crc kubenswrapper[5115]: E0120 09:09:45.955607 5115 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 09:09:45 crc kubenswrapper[5115]: E0120 09:09:45.955705 5115 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 09:09:45 crc kubenswrapper[5115]: E0120 09:09:45.955715 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-20 09:09:53.955685019 +0000 UTC m=+104.124463549 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 09:09:45 crc kubenswrapper[5115]: E0120 09:09:45.955968 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d8f5093-1a2e-4c32-8c74-b6cfb185cc99-metrics-certs podName:3d8f5093-1a2e-4c32-8c74-b6cfb185cc99 nodeName:}" failed. No retries permitted until 2026-01-20 09:09:53.955865394 +0000 UTC m=+104.124643964 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3d8f5093-1a2e-4c32-8c74-b6cfb185cc99-metrics-certs") pod "network-metrics-daemon-tzrjx" (UID: "3d8f5093-1a2e-4c32-8c74-b6cfb185cc99") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.986753 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.986844 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.986856 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.986876 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.986907 5115 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:45Z","lastTransitionTime":"2026-01-20T09:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:09:46 crc kubenswrapper[5115]: I0120 09:09:46.089729 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:46 crc kubenswrapper[5115]: I0120 09:09:46.089789 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:46 crc kubenswrapper[5115]: I0120 09:09:46.089799 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:46 crc kubenswrapper[5115]: I0120 09:09:46.089820 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:46 crc kubenswrapper[5115]: I0120 09:09:46.089837 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:46Z","lastTransitionTime":"2026-01-20T09:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:46 crc kubenswrapper[5115]: I0120 09:09:46.192727 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:46 crc kubenswrapper[5115]: I0120 09:09:46.192782 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:46 crc kubenswrapper[5115]: I0120 09:09:46.192795 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:46 crc kubenswrapper[5115]: I0120 09:09:46.192812 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:46 crc kubenswrapper[5115]: I0120 09:09:46.192824 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:46Z","lastTransitionTime":"2026-01-20T09:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:09:46 crc kubenswrapper[5115]: I0120 09:09:46.216448 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 20 09:09:46 crc kubenswrapper[5115]: I0120 09:09:46.216492 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 20 09:09:46 crc kubenswrapper[5115]: I0120 09:09:46.216522 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 20 09:09:46 crc kubenswrapper[5115]: E0120 09:09:46.216645 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 20 09:09:46 crc kubenswrapper[5115]: E0120 09:09:46.216781 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 20 09:09:46 crc kubenswrapper[5115]: E0120 09:09:46.216876 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 20 09:09:46 crc kubenswrapper[5115]: I0120 09:09:46.217010 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-tzrjx" Jan 20 09:09:46 crc kubenswrapper[5115]: E0120 09:09:46.217240 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tzrjx" podUID="3d8f5093-1a2e-4c32-8c74-b6cfb185cc99" Jan 20 09:09:46 crc kubenswrapper[5115]: I0120 09:09:46.296028 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:46 crc kubenswrapper[5115]: I0120 09:09:46.296101 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:46 crc kubenswrapper[5115]: I0120 09:09:46.296119 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:46 crc kubenswrapper[5115]: I0120 09:09:46.296145 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:46 crc kubenswrapper[5115]: I0120 09:09:46.296166 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:46Z","lastTransitionTime":"2026-01-20T09:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:46 crc kubenswrapper[5115]: I0120 09:09:46.399625 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:46 crc kubenswrapper[5115]: I0120 09:09:46.399689 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:46 crc kubenswrapper[5115]: I0120 09:09:46.399708 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:46 crc kubenswrapper[5115]: I0120 09:09:46.399732 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:46 crc kubenswrapper[5115]: I0120 09:09:46.399750 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:46Z","lastTransitionTime":"2026-01-20T09:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:46 crc kubenswrapper[5115]: I0120 09:09:46.503014 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:46 crc kubenswrapper[5115]: I0120 09:09:46.503133 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:46 crc kubenswrapper[5115]: I0120 09:09:46.503153 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:46 crc kubenswrapper[5115]: I0120 09:09:46.503181 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:46 crc kubenswrapper[5115]: I0120 09:09:46.503201 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:46Z","lastTransitionTime":"2026-01-20T09:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:46 crc kubenswrapper[5115]: I0120 09:09:46.606888 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:46 crc kubenswrapper[5115]: I0120 09:09:46.607028 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:46 crc kubenswrapper[5115]: I0120 09:09:46.607055 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:46 crc kubenswrapper[5115]: I0120 09:09:46.607088 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:46 crc kubenswrapper[5115]: I0120 09:09:46.607114 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:46Z","lastTransitionTime":"2026-01-20T09:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:46 crc kubenswrapper[5115]: I0120 09:09:46.710170 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:46 crc kubenswrapper[5115]: I0120 09:09:46.710259 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:46 crc kubenswrapper[5115]: I0120 09:09:46.710281 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:46 crc kubenswrapper[5115]: I0120 09:09:46.710308 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:46 crc kubenswrapper[5115]: I0120 09:09:46.710328 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:46Z","lastTransitionTime":"2026-01-20T09:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:46 crc kubenswrapper[5115]: I0120 09:09:46.813112 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:46 crc kubenswrapper[5115]: I0120 09:09:46.813193 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:46 crc kubenswrapper[5115]: I0120 09:09:46.813219 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:46 crc kubenswrapper[5115]: I0120 09:09:46.813249 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:46 crc kubenswrapper[5115]: I0120 09:09:46.813275 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:46Z","lastTransitionTime":"2026-01-20T09:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:46 crc kubenswrapper[5115]: I0120 09:09:46.916626 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:46 crc kubenswrapper[5115]: I0120 09:09:46.916698 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:46 crc kubenswrapper[5115]: I0120 09:09:46.916712 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:46 crc kubenswrapper[5115]: I0120 09:09:46.916735 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:46 crc kubenswrapper[5115]: I0120 09:09:46.916748 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:46Z","lastTransitionTime":"2026-01-20T09:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:47 crc kubenswrapper[5115]: I0120 09:09:47.019981 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:47 crc kubenswrapper[5115]: I0120 09:09:47.020970 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:47 crc kubenswrapper[5115]: I0120 09:09:47.021090 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:47 crc kubenswrapper[5115]: I0120 09:09:47.021204 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:47 crc kubenswrapper[5115]: I0120 09:09:47.021295 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:47Z","lastTransitionTime":"2026-01-20T09:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:47 crc kubenswrapper[5115]: I0120 09:09:47.124052 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:47 crc kubenswrapper[5115]: I0120 09:09:47.124136 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:47 crc kubenswrapper[5115]: I0120 09:09:47.124157 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:47 crc kubenswrapper[5115]: I0120 09:09:47.124183 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:47 crc kubenswrapper[5115]: I0120 09:09:47.124200 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:47Z","lastTransitionTime":"2026-01-20T09:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:47 crc kubenswrapper[5115]: I0120 09:09:47.226463 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:47 crc kubenswrapper[5115]: I0120 09:09:47.226513 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:47 crc kubenswrapper[5115]: I0120 09:09:47.226525 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:47 crc kubenswrapper[5115]: I0120 09:09:47.226540 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:47 crc kubenswrapper[5115]: I0120 09:09:47.226551 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:47Z","lastTransitionTime":"2026-01-20T09:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:47 crc kubenswrapper[5115]: I0120 09:09:47.328389 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:47 crc kubenswrapper[5115]: I0120 09:09:47.328449 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:47 crc kubenswrapper[5115]: I0120 09:09:47.328462 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:47 crc kubenswrapper[5115]: I0120 09:09:47.328480 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:47 crc kubenswrapper[5115]: I0120 09:09:47.328492 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:47Z","lastTransitionTime":"2026-01-20T09:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:47 crc kubenswrapper[5115]: I0120 09:09:47.431002 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:47 crc kubenswrapper[5115]: I0120 09:09:47.431056 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:47 crc kubenswrapper[5115]: I0120 09:09:47.431071 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:47 crc kubenswrapper[5115]: I0120 09:09:47.431090 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:47 crc kubenswrapper[5115]: I0120 09:09:47.431101 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:47Z","lastTransitionTime":"2026-01-20T09:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:47 crc kubenswrapper[5115]: I0120 09:09:47.534199 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:47 crc kubenswrapper[5115]: I0120 09:09:47.534282 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:47 crc kubenswrapper[5115]: I0120 09:09:47.534307 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:47 crc kubenswrapper[5115]: I0120 09:09:47.534340 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:47 crc kubenswrapper[5115]: I0120 09:09:47.534364 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:47Z","lastTransitionTime":"2026-01-20T09:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:47 crc kubenswrapper[5115]: I0120 09:09:47.636777 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:47 crc kubenswrapper[5115]: I0120 09:09:47.636863 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:47 crc kubenswrapper[5115]: I0120 09:09:47.636888 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:47 crc kubenswrapper[5115]: I0120 09:09:47.636968 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:47 crc kubenswrapper[5115]: I0120 09:09:47.636993 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:47Z","lastTransitionTime":"2026-01-20T09:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:47 crc kubenswrapper[5115]: I0120 09:09:47.637379 5115 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160" Jan 20 09:09:47 crc kubenswrapper[5115]: I0120 09:09:47.740090 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:47 crc kubenswrapper[5115]: I0120 09:09:47.740138 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:47 crc kubenswrapper[5115]: I0120 09:09:47.740148 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:47 crc kubenswrapper[5115]: I0120 09:09:47.740161 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:47 crc kubenswrapper[5115]: I0120 09:09:47.740171 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:47Z","lastTransitionTime":"2026-01-20T09:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:47 crc kubenswrapper[5115]: I0120 09:09:47.842965 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:47 crc kubenswrapper[5115]: I0120 09:09:47.843073 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:47 crc kubenswrapper[5115]: I0120 09:09:47.843102 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:47 crc kubenswrapper[5115]: I0120 09:09:47.843139 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:47 crc kubenswrapper[5115]: I0120 09:09:47.843164 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:47Z","lastTransitionTime":"2026-01-20T09:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:47 crc kubenswrapper[5115]: I0120 09:09:47.945676 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:47 crc kubenswrapper[5115]: I0120 09:09:47.945728 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:47 crc kubenswrapper[5115]: I0120 09:09:47.945739 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:47 crc kubenswrapper[5115]: I0120 09:09:47.945758 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:47 crc kubenswrapper[5115]: I0120 09:09:47.945769 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:47Z","lastTransitionTime":"2026-01-20T09:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.048748 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.048820 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.048836 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.048864 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.048933 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:48Z","lastTransitionTime":"2026-01-20T09:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.151636 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.151713 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.151738 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.151764 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.151782 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:48Z","lastTransitionTime":"2026-01-20T09:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.216116 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.216183 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tzrjx" Jan 20 09:09:48 crc kubenswrapper[5115]: E0120 09:09:48.216341 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.216517 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 20 09:09:48 crc kubenswrapper[5115]: E0120 09:09:48.216856 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tzrjx" podUID="3d8f5093-1a2e-4c32-8c74-b6cfb185cc99" Jan 20 09:09:48 crc kubenswrapper[5115]: E0120 09:09:48.217111 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.217164 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 20 09:09:48 crc kubenswrapper[5115]: E0120 09:09:48.217375 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.254763 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.254840 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.254859 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.254887 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.254938 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:48Z","lastTransitionTime":"2026-01-20T09:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.357937 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.358022 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.358077 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.358104 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.358122 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:48Z","lastTransitionTime":"2026-01-20T09:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.460748 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.460811 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.460827 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.460846 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.460863 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:48Z","lastTransitionTime":"2026-01-20T09:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.564226 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.564305 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.564323 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.564348 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.564369 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:48Z","lastTransitionTime":"2026-01-20T09:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.666973 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.667056 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.667075 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.667099 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.667119 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:48Z","lastTransitionTime":"2026-01-20T09:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.770096 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.770162 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.770186 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.770216 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.770237 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:48Z","lastTransitionTime":"2026-01-20T09:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.873222 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.873297 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.873316 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.873343 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.873381 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:48Z","lastTransitionTime":"2026-01-20T09:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.976222 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.976293 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.976319 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.976349 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.976372 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:48Z","lastTransitionTime":"2026-01-20T09:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:49 crc kubenswrapper[5115]: I0120 09:09:49.079786 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:49 crc kubenswrapper[5115]: I0120 09:09:49.079850 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:49 crc kubenswrapper[5115]: I0120 09:09:49.079864 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:49 crc kubenswrapper[5115]: I0120 09:09:49.079886 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:49 crc kubenswrapper[5115]: I0120 09:09:49.079938 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:49Z","lastTransitionTime":"2026-01-20T09:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:49 crc kubenswrapper[5115]: I0120 09:09:49.182760 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:49 crc kubenswrapper[5115]: I0120 09:09:49.182862 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:49 crc kubenswrapper[5115]: I0120 09:09:49.182926 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:49 crc kubenswrapper[5115]: I0120 09:09:49.182968 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:49 crc kubenswrapper[5115]: I0120 09:09:49.182993 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:49Z","lastTransitionTime":"2026-01-20T09:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:49 crc kubenswrapper[5115]: I0120 09:09:49.285414 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:49 crc kubenswrapper[5115]: I0120 09:09:49.285526 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:49 crc kubenswrapper[5115]: I0120 09:09:49.285556 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:49 crc kubenswrapper[5115]: I0120 09:09:49.285642 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:49 crc kubenswrapper[5115]: I0120 09:09:49.285671 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:49Z","lastTransitionTime":"2026-01-20T09:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:49 crc kubenswrapper[5115]: I0120 09:09:49.388447 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:49 crc kubenswrapper[5115]: I0120 09:09:49.388535 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:49 crc kubenswrapper[5115]: I0120 09:09:49.388561 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:49 crc kubenswrapper[5115]: I0120 09:09:49.388594 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:49 crc kubenswrapper[5115]: I0120 09:09:49.388616 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:49Z","lastTransitionTime":"2026-01-20T09:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:49 crc kubenswrapper[5115]: I0120 09:09:49.491708 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:49 crc kubenswrapper[5115]: I0120 09:09:49.491792 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:49 crc kubenswrapper[5115]: I0120 09:09:49.491817 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:49 crc kubenswrapper[5115]: I0120 09:09:49.491852 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:49 crc kubenswrapper[5115]: I0120 09:09:49.491875 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:49Z","lastTransitionTime":"2026-01-20T09:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:49 crc kubenswrapper[5115]: I0120 09:09:49.594307 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:49 crc kubenswrapper[5115]: I0120 09:09:49.594371 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:49 crc kubenswrapper[5115]: I0120 09:09:49.594394 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:49 crc kubenswrapper[5115]: I0120 09:09:49.594422 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:49 crc kubenswrapper[5115]: I0120 09:09:49.594444 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:49Z","lastTransitionTime":"2026-01-20T09:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:49 crc kubenswrapper[5115]: I0120 09:09:49.697100 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:49 crc kubenswrapper[5115]: I0120 09:09:49.697189 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:49 crc kubenswrapper[5115]: I0120 09:09:49.697213 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:49 crc kubenswrapper[5115]: I0120 09:09:49.697242 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:49 crc kubenswrapper[5115]: I0120 09:09:49.697267 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:49Z","lastTransitionTime":"2026-01-20T09:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:49 crc kubenswrapper[5115]: I0120 09:09:49.800013 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:49 crc kubenswrapper[5115]: I0120 09:09:49.800553 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:49 crc kubenswrapper[5115]: I0120 09:09:49.800829 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:49 crc kubenswrapper[5115]: I0120 09:09:49.801078 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:49 crc kubenswrapper[5115]: I0120 09:09:49.801250 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:49Z","lastTransitionTime":"2026-01-20T09:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:49 crc kubenswrapper[5115]: I0120 09:09:49.904670 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:49 crc kubenswrapper[5115]: I0120 09:09:49.905170 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:49 crc kubenswrapper[5115]: I0120 09:09:49.905398 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:49 crc kubenswrapper[5115]: I0120 09:09:49.905647 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:49 crc kubenswrapper[5115]: I0120 09:09:49.906026 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:49Z","lastTransitionTime":"2026-01-20T09:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.009031 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.009457 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.009661 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.009848 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.010117 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:50Z","lastTransitionTime":"2026-01-20T09:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.113266 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.114003 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.114040 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.114066 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.114084 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:50Z","lastTransitionTime":"2026-01-20T09:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.216144 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tzrjx" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.216413 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.216474 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.216574 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 20 09:09:50 crc kubenswrapper[5115]: E0120 09:09:50.216777 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 20 09:09:50 crc kubenswrapper[5115]: E0120 09:09:50.216987 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 20 09:09:50 crc kubenswrapper[5115]: E0120 09:09:50.217093 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 20 09:09:50 crc kubenswrapper[5115]: E0120 09:09:50.217260 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-tzrjx" podUID="3d8f5093-1a2e-4c32-8c74-b6cfb185cc99" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.218845 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.218922 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.218943 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.219015 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.219069 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:50Z","lastTransitionTime":"2026-01-20T09:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:09:50 crc kubenswrapper[5115]: E0120 09:09:50.220547 5115 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 20 09:09:50 crc kubenswrapper[5115]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[/bin/bash -c #!/bin/bash Jan 20 09:09:50 crc kubenswrapper[5115]: set -euo pipefail Jan 20 09:09:50 crc kubenswrapper[5115]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Jan 20 09:09:50 crc kubenswrapper[5115]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Jan 20 09:09:50 crc kubenswrapper[5115]: # As the secret mount is optional we must wait for the files to be present. 
Jan 20 09:09:50 crc kubenswrapper[5115]: # The service is created in monitor.yaml and this is created in sdn.yaml. Jan 20 09:09:50 crc kubenswrapper[5115]: TS=$(date +%s) Jan 20 09:09:50 crc kubenswrapper[5115]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Jan 20 09:09:50 crc kubenswrapper[5115]: HAS_LOGGED_INFO=0 Jan 20 09:09:50 crc kubenswrapper[5115]: Jan 20 09:09:50 crc kubenswrapper[5115]: log_missing_certs(){ Jan 20 09:09:50 crc kubenswrapper[5115]: CUR_TS=$(date +%s) Jan 20 09:09:50 crc kubenswrapper[5115]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Jan 20 09:09:50 crc kubenswrapper[5115]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. Jan 20 09:09:50 crc kubenswrapper[5115]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Jan 20 09:09:50 crc kubenswrapper[5115]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. Jan 20 09:09:50 crc kubenswrapper[5115]: HAS_LOGGED_INFO=1 Jan 20 09:09:50 crc kubenswrapper[5115]: fi Jan 20 09:09:50 crc kubenswrapper[5115]: } Jan 20 09:09:50 crc kubenswrapper[5115]: while [[ ! -f "${TLS_PK}" || ! 
-f "${TLS_CERT}" ]] ; do Jan 20 09:09:50 crc kubenswrapper[5115]: log_missing_certs Jan 20 09:09:50 crc kubenswrapper[5115]: sleep 5 Jan 20 09:09:50 crc kubenswrapper[5115]: done Jan 20 09:09:50 crc kubenswrapper[5115]: Jan 20 09:09:50 crc kubenswrapper[5115]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Jan 20 09:09:50 crc kubenswrapper[5115]: exec /usr/bin/kube-rbac-proxy \ Jan 20 09:09:50 crc kubenswrapper[5115]: --logtostderr \ Jan 20 09:09:50 crc kubenswrapper[5115]: --secure-listen-address=:9108 \ Jan 20 09:09:50 crc kubenswrapper[5115]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Jan 20 09:09:50 crc kubenswrapper[5115]: --upstream=http://127.0.0.1:29108/ \ Jan 20 09:09:50 crc kubenswrapper[5115]: --tls-private-key-file=${TLS_PK} \ Jan 20 09:09:50 crc kubenswrapper[5115]: --tls-cert-file=${TLS_CERT} Jan 20 09:09:50 crc kubenswrapper[5115]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tt9ld,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-sfqm7_openshift-ovn-kubernetes(5976ec5f-b09c-4f83-802d-6042842fd8e6): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 20 09:09:50 crc kubenswrapper[5115]: > logger="UnhandledError" Jan 20 09:09:50 crc kubenswrapper[5115]: E0120 09:09:50.224610 5115 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 20 09:09:50 crc kubenswrapper[5115]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Jan 20 09:09:50 crc kubenswrapper[5115]: if [[ -f "/env/_master" ]]; then Jan 20 09:09:50 crc kubenswrapper[5115]: set -o allexport Jan 20 09:09:50 crc kubenswrapper[5115]: source "/env/_master" Jan 20 09:09:50 crc kubenswrapper[5115]: set +o allexport Jan 20 09:09:50 crc kubenswrapper[5115]: fi Jan 20 09:09:50 crc kubenswrapper[5115]: Jan 20 09:09:50 crc kubenswrapper[5115]: ovn_v4_join_subnet_opt= Jan 20 09:09:50 crc kubenswrapper[5115]: if [[ "" != "" ]]; then Jan 20 09:09:50 crc kubenswrapper[5115]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Jan 20 
09:09:50 crc kubenswrapper[5115]: fi Jan 20 09:09:50 crc kubenswrapper[5115]: ovn_v6_join_subnet_opt= Jan 20 09:09:50 crc kubenswrapper[5115]: if [[ "" != "" ]]; then Jan 20 09:09:50 crc kubenswrapper[5115]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Jan 20 09:09:50 crc kubenswrapper[5115]: fi Jan 20 09:09:50 crc kubenswrapper[5115]: Jan 20 09:09:50 crc kubenswrapper[5115]: ovn_v4_transit_switch_subnet_opt= Jan 20 09:09:50 crc kubenswrapper[5115]: if [[ "" != "" ]]; then Jan 20 09:09:50 crc kubenswrapper[5115]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Jan 20 09:09:50 crc kubenswrapper[5115]: fi Jan 20 09:09:50 crc kubenswrapper[5115]: ovn_v6_transit_switch_subnet_opt= Jan 20 09:09:50 crc kubenswrapper[5115]: if [[ "" != "" ]]; then Jan 20 09:09:50 crc kubenswrapper[5115]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Jan 20 09:09:50 crc kubenswrapper[5115]: fi Jan 20 09:09:50 crc kubenswrapper[5115]: Jan 20 09:09:50 crc kubenswrapper[5115]: dns_name_resolver_enabled_flag= Jan 20 09:09:50 crc kubenswrapper[5115]: if [[ "false" == "true" ]]; then Jan 20 09:09:50 crc kubenswrapper[5115]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Jan 20 09:09:50 crc kubenswrapper[5115]: fi Jan 20 09:09:50 crc kubenswrapper[5115]: Jan 20 09:09:50 crc kubenswrapper[5115]: persistent_ips_enabled_flag="--enable-persistent-ips" Jan 20 09:09:50 crc kubenswrapper[5115]: Jan 20 09:09:50 crc kubenswrapper[5115]: # This is needed so that converting clusters from GA to TP Jan 20 09:09:50 crc kubenswrapper[5115]: # will rollout control plane pods as well Jan 20 09:09:50 crc kubenswrapper[5115]: network_segmentation_enabled_flag= Jan 20 09:09:50 crc kubenswrapper[5115]: multi_network_enabled_flag= Jan 20 09:09:50 crc kubenswrapper[5115]: if [[ "true" == "true" ]]; then Jan 20 09:09:50 crc kubenswrapper[5115]: multi_network_enabled_flag="--enable-multi-network" Jan 20 09:09:50 crc kubenswrapper[5115]: fi 
Jan 20 09:09:50 crc kubenswrapper[5115]: if [[ "true" == "true" ]]; then Jan 20 09:09:50 crc kubenswrapper[5115]: if [[ "true" != "true" ]]; then Jan 20 09:09:50 crc kubenswrapper[5115]: multi_network_enabled_flag="--enable-multi-network" Jan 20 09:09:50 crc kubenswrapper[5115]: fi Jan 20 09:09:50 crc kubenswrapper[5115]: network_segmentation_enabled_flag="--enable-network-segmentation" Jan 20 09:09:50 crc kubenswrapper[5115]: fi Jan 20 09:09:50 crc kubenswrapper[5115]: Jan 20 09:09:50 crc kubenswrapper[5115]: route_advertisements_enable_flag= Jan 20 09:09:50 crc kubenswrapper[5115]: if [[ "false" == "true" ]]; then Jan 20 09:09:50 crc kubenswrapper[5115]: route_advertisements_enable_flag="--enable-route-advertisements" Jan 20 09:09:50 crc kubenswrapper[5115]: fi Jan 20 09:09:50 crc kubenswrapper[5115]: Jan 20 09:09:50 crc kubenswrapper[5115]: preconfigured_udn_addresses_enable_flag= Jan 20 09:09:50 crc kubenswrapper[5115]: if [[ "false" == "true" ]]; then Jan 20 09:09:50 crc kubenswrapper[5115]: preconfigured_udn_addresses_enable_flag="--enable-preconfigured-udn-addresses" Jan 20 09:09:50 crc kubenswrapper[5115]: fi Jan 20 09:09:50 crc kubenswrapper[5115]: Jan 20 09:09:50 crc kubenswrapper[5115]: # Enable multi-network policy if configured (control-plane always full mode) Jan 20 09:09:50 crc kubenswrapper[5115]: multi_network_policy_enabled_flag= Jan 20 09:09:50 crc kubenswrapper[5115]: if [[ "false" == "true" ]]; then Jan 20 09:09:50 crc kubenswrapper[5115]: multi_network_policy_enabled_flag="--enable-multi-networkpolicy" Jan 20 09:09:50 crc kubenswrapper[5115]: fi Jan 20 09:09:50 crc kubenswrapper[5115]: Jan 20 09:09:50 crc kubenswrapper[5115]: # Enable admin network policy if configured (control-plane always full mode) Jan 20 09:09:50 crc kubenswrapper[5115]: admin_network_policy_enabled_flag= Jan 20 09:09:50 crc kubenswrapper[5115]: if [[ "true" == "true" ]]; then Jan 20 09:09:50 crc kubenswrapper[5115]: 
admin_network_policy_enabled_flag="--enable-admin-network-policy" Jan 20 09:09:50 crc kubenswrapper[5115]: fi Jan 20 09:09:50 crc kubenswrapper[5115]: Jan 20 09:09:50 crc kubenswrapper[5115]: if [ "shared" == "shared" ]; then Jan 20 09:09:50 crc kubenswrapper[5115]: gateway_mode_flags="--gateway-mode shared" Jan 20 09:09:50 crc kubenswrapper[5115]: elif [ "shared" == "local" ]; then Jan 20 09:09:50 crc kubenswrapper[5115]: gateway_mode_flags="--gateway-mode local" Jan 20 09:09:50 crc kubenswrapper[5115]: else Jan 20 09:09:50 crc kubenswrapper[5115]: echo "Invalid OVN_GATEWAY_MODE: \"shared\". Must be \"local\" or \"shared\"." Jan 20 09:09:50 crc kubenswrapper[5115]: exit 1 Jan 20 09:09:50 crc kubenswrapper[5115]: fi Jan 20 09:09:50 crc kubenswrapper[5115]: Jan 20 09:09:50 crc kubenswrapper[5115]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Jan 20 09:09:50 crc kubenswrapper[5115]: exec /usr/bin/ovnkube \ Jan 20 09:09:50 crc kubenswrapper[5115]: --enable-interconnect \ Jan 20 09:09:50 crc kubenswrapper[5115]: --init-cluster-manager "${K8S_NODE}" \ Jan 20 09:09:50 crc kubenswrapper[5115]: --config-file=/run/ovnkube-config/ovnkube.conf \ Jan 20 09:09:50 crc kubenswrapper[5115]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Jan 20 09:09:50 crc kubenswrapper[5115]: --metrics-bind-address "127.0.0.1:29108" \ Jan 20 09:09:50 crc kubenswrapper[5115]: --metrics-enable-pprof \ Jan 20 09:09:50 crc kubenswrapper[5115]: --metrics-enable-config-duration \ Jan 20 09:09:50 crc kubenswrapper[5115]: ${ovn_v4_join_subnet_opt} \ Jan 20 09:09:50 crc kubenswrapper[5115]: ${ovn_v6_join_subnet_opt} \ Jan 20 09:09:50 crc kubenswrapper[5115]: ${ovn_v4_transit_switch_subnet_opt} \ Jan 20 09:09:50 crc kubenswrapper[5115]: ${ovn_v6_transit_switch_subnet_opt} \ Jan 20 09:09:50 crc kubenswrapper[5115]: ${dns_name_resolver_enabled_flag} \ Jan 20 09:09:50 crc kubenswrapper[5115]: ${persistent_ips_enabled_flag} \ Jan 20 09:09:50 crc 
kubenswrapper[5115]: ${multi_network_enabled_flag} \ Jan 20 09:09:50 crc kubenswrapper[5115]: ${network_segmentation_enabled_flag} \ Jan 20 09:09:50 crc kubenswrapper[5115]: ${gateway_mode_flags} \ Jan 20 09:09:50 crc kubenswrapper[5115]: ${route_advertisements_enable_flag} \ Jan 20 09:09:50 crc kubenswrapper[5115]: ${preconfigured_udn_addresses_enable_flag} \ Jan 20 09:09:50 crc kubenswrapper[5115]: --enable-egress-ip=true \ Jan 20 09:09:50 crc kubenswrapper[5115]: --enable-egress-firewall=true \ Jan 20 09:09:50 crc kubenswrapper[5115]: --enable-egress-qos=true \ Jan 20 09:09:50 crc kubenswrapper[5115]: --enable-egress-service=true \ Jan 20 09:09:50 crc kubenswrapper[5115]: --enable-multicast \ Jan 20 09:09:50 crc kubenswrapper[5115]: --enable-multi-external-gateway=true \ Jan 20 09:09:50 crc kubenswrapper[5115]: ${multi_network_policy_enabled_flag} \ Jan 20 09:09:50 crc kubenswrapper[5115]: ${admin_network_policy_enabled_flag} Jan 20 09:09:50 crc kubenswrapper[5115]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tt9ld,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-sfqm7_openshift-ovn-kubernetes(5976ec5f-b09c-4f83-802d-6042842fd8e6): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 20 09:09:50 crc kubenswrapper[5115]: > logger="UnhandledError" Jan 20 09:09:50 crc kubenswrapper[5115]: E0120 09:09:50.227282 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-sfqm7" podUID="5976ec5f-b09c-4f83-802d-6042842fd8e6" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.233457 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"74386c11-427f-467a-bfa5-799093f908c9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://62aeee29713cf7b320e1bbf81544cbd80fb6575f67080fb534f54cbf1267a767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\
\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://568bfe79c3828aa5c26a80f41e7507eaa2342c0c17fb8d4b2e330a163c96af56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://81b0daa998eef062af8f4d4bb257256cfa372aed58e0bbba4e167bbfa574acd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\"
:0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://70cfe00add4342ecc6915b829d12c1c4a9476005e6a1442944be45056180b3f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70cfe00add4342ecc6915b829d12c1c4a9476005e6a1442944be45056180b3f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:08:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.250848 5115 
status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.262098 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tzrjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d8f5093-1a2e-4c32-8c74-b6cfb185cc99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtcxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtcxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tzrjx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.290384 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b51ef97-33e0-4889-bd54-ac4be09c39e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pnd9p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.306111 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.318194 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.321934 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.321969 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 
09:09:50.322006 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.322020 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.322029 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:50Z","lastTransitionTime":"2026-01-20T09:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.334142 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-xjql7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f41177fd-db48-43c1-9a8d-69cad41d3fab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"}
,{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zmmw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xjql7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.344674 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5tt8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92f344d4-34bc-4412-83c9-6b7beb45db64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwps7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5tt8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.355828 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.356156 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.356310 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 
09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.356480 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.356614 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:50Z","lastTransitionTime":"2026-01-20T09:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:09:50 crc kubenswrapper[5115]: E0120 09:09:50.372470 5115 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:50Z\\\",\\\"message\\\":\\\"kubelet 
has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03
a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926}
,{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeByte
s\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-de
v/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f3c68733-f696-46f4-a86e-b22c133b82e3\\\",\\\"systemUUID\\\":\\\"4e7ead0d-ccd6-45dd-b671-f46e59803438\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.373467 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69226b59-0946-40c7-a9a3-38368638de30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://3438785036ee5cce0cfb7ef5015765de9e91020a660f22067f83fe7088f6983a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",
\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://7cf2bf860f3578cf077c66e64feccdb0f4aa9b087c452b75e9089435dbe938ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f147340eaa8ad9365db74bb82cf821ebd6579e31407e87af1956220ccf9907a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\
"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c3c4ab2513a300c9031279fe7c4f932126d69745f336cee3a8adcd6cd8bd0cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://92465a413675efac7faed27b64279954bdfa6292127a177c3bff862358a9a025\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a68
2480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f10288ab59089753121af19b0f7ad453fc7b7a50d66e5d429b1f5e46962dbc35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f10288ab59089753121af19b0f7ad453fc7b7a50d66e5d429b1f5e46962dbc35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}
,\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://7eb1250e743174b8367f49423c7ae31f24a26c0ce04d9cf83c7747f12e1363c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7eb1250e743174b8367f49423c7ae31f24a26c0ce04d9cf83c7747f12e1363c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://e0f6ae1955cc25375e7a0bdbfb67efdcbb6a702b9faf54d40a444392f0e7c151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-
o://e0f6ae1955cc25375e7a0bdbfb67efdcbb6a702b9faf54d40a444392f0e7c151\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:08:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.378563 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.378635 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.378658 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.378686 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.378706 5115 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:50Z","lastTransitionTime":"2026-01-20T09:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.394856 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:50 crc kubenswrapper[5115]: E0120 09:09:50.400308 5115 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f3c68733-f696-46f4-a86e-b22c133b82e3\\\",\\\"systemUUID\\\":\\\"4e7ead0d-ccd6-45dd-b671-f46e59803438\\\"},\\\"runtimeHa
ndlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.412577 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.412635 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.412654 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.412678 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.412696 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:50Z","lastTransitionTime":"2026-01-20T09:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.428586 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:50 crc kubenswrapper[5115]: E0120 09:09:50.433433 5115 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:50Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1
919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c486
7005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\
\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb
3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f3c68733-f696-46f4-a86e-b22c133b82e3\\\",\\\"systemUUID\\\":\\\"4e7ead0d-ccd6-45dd-b671-f46e59803438\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.447355 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zvfcd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc89765b-3b00-4f86-ae67-a5088c182918\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7g8mg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7g8mg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zvfcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.447958 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.448024 5115 kubelet_node_status.go:736] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.448041 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.448064 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.448079 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:50Z","lastTransitionTime":"2026-01-20T09:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:09:50 crc kubenswrapper[5115]: E0120 09:09:50.469732 5115 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:50Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1
919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c486
7005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\
\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb
3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f3c68733-f696-46f4-a86e-b22c133b82e3\\\",\\\"systemUUID\\\":\\\"4e7ead0d-ccd6-45dd-b671-f46e59803438\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.473837 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.473864 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.473874 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.473905 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 
09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.473916 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:50Z","lastTransitionTime":"2026-01-20T09:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.474423 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-bht7q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"650d165f-75fb-4a16-a8fa-d8366b5f6eea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2p9bt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bht7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:50 crc kubenswrapper[5115]: E0120 09:09:50.483906 5115 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f3c68733-f696-46f4-a86e-b22c133b82e3\\\",\\\"systemUUID\\\":\\\"4e7ead0d-ccd6-45dd-b671-f46e59803438\\\"},\\\"runtimeHa
ndlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:50 crc kubenswrapper[5115]: E0120 09:09:50.484023 5115 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.485437 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.485468 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.485477 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.485490 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.485499 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:50Z","lastTransitionTime":"2026-01-20T09:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.486018 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bmvv2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b42cc5a-50db-4588-8149-e758f33704ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bmvv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.499107 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5125ab95-d5cf-48ad-a899-3add343eaeba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://732f833d741db4f25185d597b6c55514eac6e2fefadb22332239b99e78faa12c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\
\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4459efcaad2c1e7ab6acad4f70731a19325a72c01d38b2f5c5ebb0e654c3e652\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7bc7ce39ff7ab01bae0a1441c0086dd0bb588059f1c38dcf038a03d08f73e0f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc47
5aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b33a6c20a19b1f41a4ad2db77e01a63543fbc61ea2e64cd19c7c6530bae76c3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b33a6c20a19b1f41a4ad2db77e01a63543fbc61ea2e64cd19c7c6530bae76c3b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T09:09:22Z\\\",\\\"message\\\":\\\"var.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"ClientsAllowCBOR\\\\\\\" enabled=false\\\\nW0120 09:09:21.702814 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0120 09:09:21.703031 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0120 09:09:21.704002 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-4007456761/tls.crt::/tmp/serving-cert-4007456761/tls.key\\\\\\\"\\\\nI0120 09:09:22.179437 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 09:09:22.181269 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 09:09:22.181287 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 09:09:22.181316 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 09:09:22.181321 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 09:09:22.184781 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 09:09:22.184834 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:09:22.184840 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:09:22.184845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 09:09:22.184848 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 09:09:22.184851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 09:09:22.184854 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 09:09:22.185244 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 09:09:22.186562 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T09:09:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6a65133584c92a02557ec7a68bc231cbf328c72b94121d393761fae9e77a43df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\
":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f042b661a3072f2466176ad58de653a1ac5fd34d0d1c9b846b833b88bded9006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f042b661a3072f2466176ad58de653a1ac5fd34d0d1c9b846b833b88bded9006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:08:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.511332 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.519118 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-sfqm7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5976ec5f-b09c-4f83-802d-6042842fd8e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt9ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt9ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-sfqm7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.522396 5115 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160" Jan 20 
09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.527822 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25383c7b-b61c-48bd-b099-c7c8f90c6c1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f93bd1c4ac75f0c99554549eefe09dda170f1b0afebc9787b7fd0a0494295d1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath
\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://af4e50c289ad8aa409f7e54320df8a017c15f3b8c608266e408fe79c4155cbc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af4e50c289ad8aa409f7e54320df8a017c15f3b8c608266e408fe79c4155cbc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:08:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.539075 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c36dad2-2b5f-476d-ae16-db72a8a479e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://9cba2d9418782f2aa23b490fca45506e8a44b0f733ce30c248299532a7c06d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://cee213223198b5e3642cdac2764daeb64bf20128377548aa985feafed2a3d447\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a2d7f893e43011292fd2dc960e3f3f89c2af1830eace24fdafba43340a362e1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa47
2d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6c00207af01190039121d0127e5a029446b01758e672d57fe7d8c31b546a00d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"
name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:08:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.589189 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.589286 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.589313 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.589341 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.589369 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:50Z","lastTransitionTime":"2026-01-20T09:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.692137 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.692200 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.692220 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.692246 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.692264 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:50Z","lastTransitionTime":"2026-01-20T09:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.794675 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.794767 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.794789 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.794818 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.794847 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:50Z","lastTransitionTime":"2026-01-20T09:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.897762 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.897843 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.897858 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.897885 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.897945 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:50Z","lastTransitionTime":"2026-01-20T09:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.000463 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.000523 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.000535 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.000560 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.000572 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:51Z","lastTransitionTime":"2026-01-20T09:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.102963 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.103035 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.103049 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.103072 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.103089 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:51Z","lastTransitionTime":"2026-01-20T09:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.206439 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.206501 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.206510 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.206532 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.206545 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:51Z","lastTransitionTime":"2026-01-20T09:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.311004 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.311049 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.311061 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.311080 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.311091 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:51Z","lastTransitionTime":"2026-01-20T09:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.413428 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.413500 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.413511 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.413532 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.413546 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:51Z","lastTransitionTime":"2026-01-20T09:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.515717 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.515771 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.515785 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.515804 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.515817 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:51Z","lastTransitionTime":"2026-01-20T09:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.618253 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.618304 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.618317 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.618526 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.618559 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:51Z","lastTransitionTime":"2026-01-20T09:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.672366 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-bht7q" event={"ID":"650d165f-75fb-4a16-a8fa-d8366b5f6eea","Type":"ContainerStarted","Data":"8a493b21b70e5ca7478414f87f98ad6276550fe379c53f2a7de532436a079af9"} Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.678268 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-5tt8v" event={"ID":"92f344d4-34bc-4412-83c9-6b7beb45db64","Type":"ContainerStarted","Data":"ebbc93aa8ffe71c586af90a1ae797c4ebc8c5f3006d2f2cd16fe20b169f230b5"} Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.681757 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"b55ff7536dd9e8b83a738f5d6e23ff8882a27e30ba3ce9d545ea86cb80d7e1ba"} Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.686564 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.698158 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.711005 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-xjql7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f41177fd-db48-43c1-9a8d-69cad41d3fab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zmmw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xjql7\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.721645 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5tt8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92f344d4-34bc-4412-83c9-6b7beb45db64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwps7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5tt8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.722358 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.722423 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.722447 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.722515 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.722536 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:51Z","lastTransitionTime":"2026-01-20T09:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.755945 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69226b59-0946-40c7-a9a3-38368638de30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://3438785036ee5cce0cfb7ef5015765de9e91020a660f22067f83fe7088f6983a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGro
ups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://7cf2bf860f3578cf077c66e64feccdb0f4aa9b087c452b75e9089435dbe938ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f147340eaa8ad9365db74bb82cf821ebd6579e31407e87af1956220ccf9907a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c3c4ab2513a300c9031279fe7c4f932126d69745f336cee3a8adcd6cd8bd0cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://92465a413675efac7faed27b64279954bd
fa6292127a177c3bff862358a9a025\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f10288ab59089753121af19b0f7ad453fc7b7a50d66e5d429b1f5e46962dbc35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f10288ab
59089753121af19b0f7ad453fc7b7a50d66e5d429b1f5e46962dbc35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://7eb1250e743174b8367f49423c7ae31f24a26c0ce04d9cf83c7747f12e1363c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7eb1250e743174b8367f49423c7ae31f24a26c0ce04d9cf83c7747f12e1363c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://e0f6ae1955cc25375e7a0bdbfb67efdcbb6a702b9faf54d40a444392f0e7c151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682
480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0f6ae1955cc25375e7a0bdbfb67efdcbb6a702b9faf54d40a444392f0e7c151\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:08:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.772001 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.785539 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.798476 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zvfcd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc89765b-3b00-4f86-ae67-a5088c182918\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7g8mg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7g8mg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zvfcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.807326 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-bht7q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"650d165f-75fb-4a16-a8fa-d8366b5f6eea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"},\\\"containerID\\\":\\\"cri-o://8a493b21b70e5ca7478414f87f98ad6276550fe379c53f2a7de532436a079af9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:09:51Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2p9bt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bht7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.820169 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bmvv2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b42cc5a-50db-4588-8149-e758f33704ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bmvv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.825201 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.825279 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.825299 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:51 
crc kubenswrapper[5115]: I0120 09:09:51.825328 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.825346 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:51Z","lastTransitionTime":"2026-01-20T09:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.836841 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5125ab95-d5cf-48ad-a899-3add343eaeba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://732f833d741db4f25185d597b6c55514eac6e2fefadb22332239b99e78faa12c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4459efcaad2c1e7ab6acad4f70731a19325a72c01d38b2f5c5ebb0e654c3e652\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7bc7ce39ff7ab01bae0a1441c0086dd0bb588059f1c38dcf038a03d08f73e0f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b33a6c20a1
9b1f41a4ad2db77e01a63543fbc61ea2e64cd19c7c6530bae76c3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b33a6c20a19b1f41a4ad2db77e01a63543fbc61ea2e64cd19c7c6530bae76c3b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T09:09:22Z\\\",\\\"message\\\":\\\"var.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"ClientsAllowCBOR\\\\\\\" enabled=false\\\\nW0120 09:09:21.702814 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0120 09:09:21.703031 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0120 09:09:21.704002 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4007456761/tls.crt::/tmp/serving-cert-4007456761/tls.key\\\\\\\"\\\\nI0120 09:09:22.179437 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 09:09:22.181269 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 09:09:22.181287 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 09:09:22.181316 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 09:09:22.181321 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 09:09:22.184781 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 09:09:22.184834 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:09:22.184840 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:09:22.184845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 09:09:22.184848 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 09:09:22.184851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 09:09:22.184854 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 09:09:22.185244 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 09:09:22.186562 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T09:09:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6a65133584c92a02557ec7a68bc231cbf328c72b94121d393761fae9e77a43df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f042b661a3072f2466176ad58de653a1ac5fd34d0d1c9b846b833b88bded9006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f042b661a3072f2466176ad58de653a1ac5fd34d0d1c9b846b833b88bded9006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:08:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.848042 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.861218 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-sfqm7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5976ec5f-b09c-4f83-802d-6042842fd8e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt9ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt9ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-sfqm7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.871389 5115 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25383c7b-b61c-48bd-b099-c7c8f90c6c1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f93bd1c4ac75f0c99554549eefe09dda170f1b0afebc9787b7fd0a0494295d1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disable
d\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://af4e50c289ad8aa409f7e54320df8a017c15f3b8c608266e408fe79c4155cbc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af4e50c289ad8aa409f7e54320df8a017c15f3b8c608266e408fe79c4155cbc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:08:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.883714 5115 status_manager.go:919] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c36dad2-2b5f-476d-ae16-db72a8a479e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://9cba2d9418782f2aa23b490fca45506e8a44b0f733ce30c248299532a7c06d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPa
th\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://cee213223198b5e3642cdac2764daeb64bf20128377548aa985feafed2a3d447\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a2d7f893e43011292fd2dc960e3f3f89c2af1830eace24fdafba43340a362e1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\
\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6c00207af01190039121d0127e5a029446b01758e672d57fe7d8c31b546a00d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/c
a-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:08:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.893366 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74386c11-427f-467a-bfa5-799093f908c9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://62aeee29713cf7b320e1bbf81544cbd80fb6575f67080fb534f54cbf1267a767\\\",\\\"i
mage\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://568bfe79c3828aa5c26a80f41e7507eaa2342c0c17fb8d4b2e330a163c96af56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/k
ubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://81b0daa998eef062af8f4d4bb257256cfa372aed58e0bbba4e167bbfa574acd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://70cfe00add4342ecc6915b829d12c1c4a9476005e6a1442944be45056180b3f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\
\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70cfe00add4342ecc6915b829d12c1c4a9476005e6a1442944be45056180b3f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:08:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.904259 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.913201 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tzrjx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d8f5093-1a2e-4c32-8c74-b6cfb185cc99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtcxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtcxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tzrjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.928145 5115 kubelet_node_status.go:736] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.928230 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.928248 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.928269 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.928288 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:51Z","lastTransitionTime":"2026-01-20T09:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.934430 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b51ef97-33e0-4889-bd54-ac4be09c39e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pnd9p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.948008 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-sfqm7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5976ec5f-b09c-4f83-802d-6042842fd8e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt9ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt9ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-sfqm7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.959526 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25383c7b-b61c-48bd-b099-c7c8f90c6c1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f93bd1c4ac75f0c99554549eefe09dda170f1b0afebc9787b7fd0a0494295d1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://af4e50c289ad8aa409f7e54320df8a017c15f3b8c608266e408fe79c4155cbc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af4e50c289ad8aa409f7e54320df8a017c15f3b8c608266e408fe79c4155cbc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:08:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.973021 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c36dad2-2b5f-476d-ae16-db72a8a479e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://9cba2d9418782f2aa23b490fca45506e8a44b0f733ce30c248299532a7c06d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPa
th\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://cee213223198b5e3642cdac2764daeb64bf20128377548aa985feafed2a3d447\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a2d7f893e43011292fd2dc960e3f3f89c2af1830eace24fdafba43340a362e1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi
\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6c00207af01190039121d0127e5a029446b01758e672d57fe7d8c31b546a00d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:08:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.985148 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74386c11-427f-467a-bfa5-799093f908c9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://62aeee29713cf7b320e1bbf81544cbd80fb6575f67080fb534f54cbf1267a767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559
027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://568bfe79c3828aa5c26a80f41e7507eaa2342c0c17fb8d4b2e330a163c96af56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\"
:{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://81b0daa998eef062af8f4d4bb257256cfa372aed58e0bbba4e167bbfa574acd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://70cfe00add4342ecc6915b829d12c1c4a9476005e6a1442944be45056180b3f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70cfe00add4342
ecc6915b829d12c1c4a9476005e6a1442944be45056180b3f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:08:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.001058 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.011919 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tzrjx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d8f5093-1a2e-4c32-8c74-b6cfb185cc99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtcxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtcxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tzrjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.031086 5115 kubelet_node_status.go:736] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.031145 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.031155 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.031172 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.031181 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:52Z","lastTransitionTime":"2026-01-20T09:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.046311 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b51ef97-33e0-4889-bd54-ac4be09c39e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pnd9p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.058564 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.071208 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.085833 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-xjql7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f41177fd-db48-43c1-9a8d-69cad41d3fab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zmmw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xjql7\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.098785 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5tt8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92f344d4-34bc-4412-83c9-6b7beb45db64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"},\\\"containerID\\\":\\\"cri-o://ebbc93aa8ffe71c586af90a1ae797c4ebc8c5f3006d2f2cd16fe20b169f230b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"}},\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:09:51Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":1001}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwps7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5tt8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.129416 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69226b59-0946-40c7-a9a3-38368638de30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://3438785036ee5cce0cfb7ef5015765de9e91020a660f22067f83fe7088f6983a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",
\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://7cf2bf860f3578cf077c66e64feccdb0f4aa9b087c452b75e9089435dbe938ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f147340eaa8ad9365db74bb82cf821ebd6579e31407e87af1956220ccf9907a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\
"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c3c4ab2513a300c9031279fe7c4f932126d69745f336cee3a8adcd6cd8bd0cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://92465a413675efac7faed27b64279954bdfa6292127a177c3bff862358a9a025\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a68
2480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f10288ab59089753121af19b0f7ad453fc7b7a50d66e5d429b1f5e46962dbc35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f10288ab59089753121af19b0f7ad453fc7b7a50d66e5d429b1f5e46962dbc35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}
,\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://7eb1250e743174b8367f49423c7ae31f24a26c0ce04d9cf83c7747f12e1363c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7eb1250e743174b8367f49423c7ae31f24a26c0ce04d9cf83c7747f12e1363c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://e0f6ae1955cc25375e7a0bdbfb67efdcbb6a702b9faf54d40a444392f0e7c151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-
o://e0f6ae1955cc25375e7a0bdbfb67efdcbb6a702b9faf54d40a444392f0e7c151\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:08:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.134617 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.134726 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.134779 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.134810 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.134826 5115 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:52Z","lastTransitionTime":"2026-01-20T09:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.146148 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b55ff7536dd9e8b83a738f5d6e23ff8882a27e30ba3ce9d545ea86cb80d7e1ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\
\\"startedAt\\\":\\\"2026-01-20T09:09:51Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.162441 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.175641 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zvfcd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc89765b-3b00-4f86-ae67-a5088c182918\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7g8mg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7g8mg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zvfcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.189269 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-bht7q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"650d165f-75fb-4a16-a8fa-d8366b5f6eea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"},\\\"containerID\\\":\\\"cri-o://8a493b21b70e5ca7478414f87f98ad6276550fe379c53f2a7de532436a079af9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:09:51Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2p9bt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bht7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.205787 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bmvv2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b42cc5a-50db-4588-8149-e758f33704ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bmvv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.216685 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tzrjx" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.216791 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.216722 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 20 09:09:52 crc kubenswrapper[5115]: E0120 09:09:52.216972 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.217107 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 20 09:09:52 crc kubenswrapper[5115]: E0120 09:09:52.217360 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 20 09:09:52 crc kubenswrapper[5115]: E0120 09:09:52.218010 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 20 09:09:52 crc kubenswrapper[5115]: E0120 09:09:52.218360 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tzrjx" podUID="3d8f5093-1a2e-4c32-8c74-b6cfb185cc99" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.223585 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5125ab95-d5cf-48ad-a899-3add343eaeba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://732f833d741db4f25185d597b6c55514eac6e2fefadb22332239b99e78faa12c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4459efcaad2c1e7ab6acad4f70731a19325a72c01d38b2f5c5ebb0e654c3e652\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7bc7ce39ff7ab01bae0a1441c0086dd0bb588059f1c38dcf038a03d08f73e0f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b33a6c20a1
9b1f41a4ad2db77e01a63543fbc61ea2e64cd19c7c6530bae76c3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b33a6c20a19b1f41a4ad2db77e01a63543fbc61ea2e64cd19c7c6530bae76c3b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T09:09:22Z\\\",\\\"message\\\":\\\"var.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"ClientsAllowCBOR\\\\\\\" enabled=false\\\\nW0120 09:09:21.702814 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0120 09:09:21.703031 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0120 09:09:21.704002 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4007456761/tls.crt::/tmp/serving-cert-4007456761/tls.key\\\\\\\"\\\\nI0120 09:09:22.179437 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 09:09:22.181269 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 09:09:22.181287 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 09:09:22.181316 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 09:09:22.181321 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 09:09:22.184781 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 09:09:22.184834 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:09:22.184840 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:09:22.184845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 09:09:22.184848 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 09:09:22.184851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 09:09:22.184854 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 09:09:22.185244 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 09:09:22.186562 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T09:09:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6a65133584c92a02557ec7a68bc231cbf328c72b94121d393761fae9e77a43df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f042b661a3072f2466176ad58de653a1ac5fd34d0d1c9b846b833b88bded9006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f042b661a3072f2466176ad58de653a1ac5fd34d0d1c9b846b833b88bded9006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:08:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.238674 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.238787 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.238810 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.238841 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.238860 5115 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:52Z","lastTransitionTime":"2026-01-20T09:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.243507 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.342662 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.342736 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.342754 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.342780 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.342802 5115 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:52Z","lastTransitionTime":"2026-01-20T09:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.446539 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.446604 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.446628 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.446659 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.446682 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:52Z","lastTransitionTime":"2026-01-20T09:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.550003 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.550057 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.550068 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.550085 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.550094 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:52Z","lastTransitionTime":"2026-01-20T09:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.653398 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.653462 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.653476 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.653503 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.653520 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:52Z","lastTransitionTime":"2026-01-20T09:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.687223 5115 generic.go:358] "Generic (PLEG): container finished" podID="0b51ef97-33e0-4889-bd54-ac4be09c39e7" containerID="7a7ed1933ad1c3e8e4846138b7c25f0e01b03dbae5680684a35133c923073286" exitCode=0 Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.687344 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" event={"ID":"0b51ef97-33e0-4889-bd54-ac4be09c39e7","Type":"ContainerDied","Data":"7a7ed1933ad1c3e8e4846138b7c25f0e01b03dbae5680684a35133c923073286"} Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.689882 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zvfcd" event={"ID":"dc89765b-3b00-4f86-ae67-a5088c182918","Type":"ContainerStarted","Data":"0cb99b9960631ec0d3f80adf4b325d73a90bdebbe453648f57cffc26e11a89e8"} Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.689960 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zvfcd" event={"ID":"dc89765b-3b00-4f86-ae67-a5088c182918","Type":"ContainerStarted","Data":"95c07e0438f206b88563e2b39a6250eb2706530b4f1d2646ed4348287befe586"} Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.714365 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69226b59-0946-40c7-a9a3-38368638de30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://3438785036ee5cce0cfb7ef5015765de9e91020a660f22067f83fe7088f6983a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",
\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://7cf2bf860f3578cf077c66e64feccdb0f4aa9b087c452b75e9089435dbe938ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f147340eaa8ad9365db74bb82cf821ebd6579e31407e87af1956220ccf9907a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\
"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c3c4ab2513a300c9031279fe7c4f932126d69745f336cee3a8adcd6cd8bd0cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://92465a413675efac7faed27b64279954bdfa6292127a177c3bff862358a9a025\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a68
2480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f10288ab59089753121af19b0f7ad453fc7b7a50d66e5d429b1f5e46962dbc35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f10288ab59089753121af19b0f7ad453fc7b7a50d66e5d429b1f5e46962dbc35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}
,\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://7eb1250e743174b8367f49423c7ae31f24a26c0ce04d9cf83c7747f12e1363c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7eb1250e743174b8367f49423c7ae31f24a26c0ce04d9cf83c7747f12e1363c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://e0f6ae1955cc25375e7a0bdbfb67efdcbb6a702b9faf54d40a444392f0e7c151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-
o://e0f6ae1955cc25375e7a0bdbfb67efdcbb6a702b9faf54d40a444392f0e7c151\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:08:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.726124 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b55ff7536dd9e8b83a738f5d6e23ff8882a27e30ba3ce9d545ea86cb80d7e1ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:09:51Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.738759 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.753751 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zvfcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc89765b-3b00-4f86-ae67-a5088c182918\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7g8mg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7g8mg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zvfcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.756235 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.756286 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.756297 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.756314 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.756325 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:52Z","lastTransitionTime":"2026-01-20T09:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.767271 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-bht7q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"650d165f-75fb-4a16-a8fa-d8366b5f6eea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"},\\\"containerID\\\":\\\"cri-o://8a493b21b70e5ca7478414f87f98ad6276550fe379c53f2a7de532436a079af9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:09:51Z\\\"}},\\\"user\\\":{\\\
"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2p9bt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bht7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.779798 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bmvv2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b42cc5a-50db-4588-8149-e758f33704ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bmvv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.793766 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5125ab95-d5cf-48ad-a899-3add343eaeba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://732f833d741db4f25185d597b6c55514eac6e2fefadb22332239b99e78faa12c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\
\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4459efcaad2c1e7ab6acad4f70731a19325a72c01d38b2f5c5ebb0e654c3e652\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7bc7ce39ff7ab01bae0a1441c0086dd0bb588059f1c38dcf038a03d08f73e0f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc47
5aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b33a6c20a19b1f41a4ad2db77e01a63543fbc61ea2e64cd19c7c6530bae76c3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b33a6c20a19b1f41a4ad2db77e01a63543fbc61ea2e64cd19c7c6530bae76c3b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T09:09:22Z\\\",\\\"message\\\":\\\"var.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"ClientsAllowCBOR\\\\\\\" enabled=false\\\\nW0120 09:09:21.702814 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0120 09:09:21.703031 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0120 09:09:21.704002 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-4007456761/tls.crt::/tmp/serving-cert-4007456761/tls.key\\\\\\\"\\\\nI0120 09:09:22.179437 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 09:09:22.181269 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 09:09:22.181287 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 09:09:22.181316 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 09:09:22.181321 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 09:09:22.184781 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 09:09:22.184834 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:09:22.184840 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:09:22.184845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 09:09:22.184848 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 09:09:22.184851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 09:09:22.184854 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 09:09:22.185244 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 09:09:22.186562 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T09:09:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6a65133584c92a02557ec7a68bc231cbf328c72b94121d393761fae9e77a43df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\
":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f042b661a3072f2466176ad58de653a1ac5fd34d0d1c9b846b833b88bded9006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f042b661a3072f2466176ad58de653a1ac5fd34d0d1c9b846b833b88bded9006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:08:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.805633 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.816235 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-sfqm7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5976ec5f-b09c-4f83-802d-6042842fd8e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt9ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt9ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-sfqm7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.826592 5115 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25383c7b-b61c-48bd-b099-c7c8f90c6c1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f93bd1c4ac75f0c99554549eefe09dda170f1b0afebc9787b7fd0a0494295d1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disable
d\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://af4e50c289ad8aa409f7e54320df8a017c15f3b8c608266e408fe79c4155cbc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af4e50c289ad8aa409f7e54320df8a017c15f3b8c608266e408fe79c4155cbc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:08:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.840630 5115 status_manager.go:919] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c36dad2-2b5f-476d-ae16-db72a8a479e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://9cba2d9418782f2aa23b490fca45506e8a44b0f733ce30c248299532a7c06d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPa
th\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://cee213223198b5e3642cdac2764daeb64bf20128377548aa985feafed2a3d447\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a2d7f893e43011292fd2dc960e3f3f89c2af1830eace24fdafba43340a362e1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\
\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6c00207af01190039121d0127e5a029446b01758e672d57fe7d8c31b546a00d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/c
a-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:08:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.851574 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74386c11-427f-467a-bfa5-799093f908c9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://62aeee29713cf7b320e1bbf81544cbd80fb6575f67080fb534f54cbf1267a767\\\",\\\"i
mage\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://568bfe79c3828aa5c26a80f41e7507eaa2342c0c17fb8d4b2e330a163c96af56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/k
ubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://81b0daa998eef062af8f4d4bb257256cfa372aed58e0bbba4e167bbfa574acd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://70cfe00add4342ecc6915b829d12c1c4a9476005e6a1442944be45056180b3f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\
\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70cfe00add4342ecc6915b829d12c1c4a9476005e6a1442944be45056180b3f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:08:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.858489 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.858548 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.858561 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.858581 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.858595 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:52Z","lastTransitionTime":"2026-01-20T09:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.864274 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.872685 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tzrjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d8f5093-1a2e-4c32-8c74-b6cfb185cc99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtcxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtcxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tzrjx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.893341 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b51ef97-33e0-4889-bd54-ac4be09c39e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a7ed1933ad1c3e8e4846138b7c25f0e01b03dbae5680684a35133c923073286\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a7ed1933ad1c3e8e4846138b7c25f0e01b03dbae5680684a35133c923073286\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:09:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:09:52Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\
":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pnd9p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.906354 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.920773 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.932632 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-xjql7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f41177fd-db48-43c1-9a8d-69cad41d3fab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zmmw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xjql7\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.941848 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5tt8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92f344d4-34bc-4412-83c9-6b7beb45db64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"},\\\"containerID\\\":\\\"cri-o://ebbc93aa8ffe71c586af90a1ae797c4ebc8c5f3006d2f2cd16fe20b169f230b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"}},\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:09:51Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":1001}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwps7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5tt8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.950877 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25383c7b-b61c-48bd-b099-c7c8f90c6c1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f93bd1c4ac75f0c99554549eefe09dda170f1b0afebc9787b7fd0a0494295d1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://af4e50c289ad8aa409f7e54320df8a017c15f3b8c608266e408fe79c4155cbc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af4e50c289ad8aa409f7e54320df8a017c15f3b8c608266e408fe79c4155cbc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:08:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.961130 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:52 crc 
kubenswrapper[5115]: I0120 09:09:52.961202 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.961213 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.961232 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.961244 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:52Z","lastTransitionTime":"2026-01-20T09:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.963986 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c36dad2-2b5f-476d-ae16-db72a8a479e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://9cba2d9418782f2aa23b490fca45506e8a44b0f733ce30c248299532a7c06d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name
\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://cee213223198b5e3642cdac2764daeb64bf20128377548aa985feafed2a3d447\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a2d7f893e43011292fd2dc960e3f3f89c2af1830eace24fdafba43340a362e1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6c00207af01190039121d0127e5a029446b01758e672d57fe7d8c31b546a00d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\
":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:08:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.975574 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74386c11-427f-467a-bfa5-799093f908c9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://62aeee29713cf7b320e1bbf81544cbd80fb6575f67080fb534f54cbf1267a767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca0
8e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://568bfe79c3828aa5c26a80f41e7507eaa2342c0c17fb8d4b2e330a163c96af56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\
"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://81b0daa998eef062af8f4d4bb257256cfa372aed58e0bbba4e167bbfa574acd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://70cfe00add4342ecc6915b829d12c1c4a9476005e6a1442944be45056180b3f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70cfe00add4342ecc6915b829d12c1c4a9476005e6
a1442944be45056180b3f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:08:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.987179 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.998917 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tzrjx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d8f5093-1a2e-4c32-8c74-b6cfb185cc99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtcxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtcxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tzrjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.025961 5115 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b51ef97-33e0-4889-bd54-ac4be09c39e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a7ed1933ad1c3e8e4846138b7c25f0e01b03dbae5680684a35133c923073286\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a7ed1933ad1c3e8e4846138b7c25f0e01b03dbae5680684a35133c923073286\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:09:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:09:52Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\
":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pnd9p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.040695 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.054970 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.064554 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.064607 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.064618 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.064638 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.064652 5115 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:53Z","lastTransitionTime":"2026-01-20T09:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.065815 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-xjql7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f41177fd-db48-43c1-9a8d-69cad41d3fab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zmmw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xjql7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.097387 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5tt8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92f344d4-34bc-4412-83c9-6b7beb45db64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"},\\\"co
ntainerID\\\":\\\"cri-o://ebbc93aa8ffe71c586af90a1ae797c4ebc8c5f3006d2f2cd16fe20b169f230b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:09:51Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":1001}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwps7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5tt8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.130843 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69226b59-0946-40c7-a9a3-38368638de30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://3438785036ee5cce0cfb7ef5015765de9e91020a660f22067f83fe7088f6983a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",
\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://7cf2bf860f3578cf077c66e64feccdb0f4aa9b087c452b75e9089435dbe938ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f147340eaa8ad9365db74bb82cf821ebd6579e31407e87af1956220ccf9907a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\
"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c3c4ab2513a300c9031279fe7c4f932126d69745f336cee3a8adcd6cd8bd0cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://92465a413675efac7faed27b64279954bdfa6292127a177c3bff862358a9a025\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a68
2480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f10288ab59089753121af19b0f7ad453fc7b7a50d66e5d429b1f5e46962dbc35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f10288ab59089753121af19b0f7ad453fc7b7a50d66e5d429b1f5e46962dbc35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}
,\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://7eb1250e743174b8367f49423c7ae31f24a26c0ce04d9cf83c7747f12e1363c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7eb1250e743174b8367f49423c7ae31f24a26c0ce04d9cf83c7747f12e1363c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://e0f6ae1955cc25375e7a0bdbfb67efdcbb6a702b9faf54d40a444392f0e7c151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-
o://e0f6ae1955cc25375e7a0bdbfb67efdcbb6a702b9faf54d40a444392f0e7c151\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:08:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.145290 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b55ff7536dd9e8b83a738f5d6e23ff8882a27e30ba3ce9d545ea86cb80d7e1ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:09:51Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.159415 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.166910 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.166959 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.166970 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.166986 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.166998 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:53Z","lastTransitionTime":"2026-01-20T09:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.170436 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zvfcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc89765b-3b00-4f86-ae67-a5088c182918\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://0cb99b9960631ec0d3f80adf4b325d73a90bdebbe453648f57cffc26e11a89e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:09:52Z\\\"}},\\\"user\\\":{\\\
"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7g8mg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://95c07e0438f206b88563e2b39a6250eb2706530b4f1d2646ed4348287befe586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:09:52Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7g8mg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zvfcd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.179024 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-bht7q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"650d165f-75fb-4a16-a8fa-d8366b5f6eea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"},\\\"containerID\\\":\\\"cri-o://8a493b21b70e5ca7478414f87f98ad6276550fe379c53f2a7de532436a079af9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\
\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:09:51Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2p9bt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bht7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.192924 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bmvv2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b42cc5a-50db-4588-8149-e758f33704ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni 
whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bmvv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.207250 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5125ab95-d5cf-48ad-a899-3add343eaeba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://732f833d741db4f25185d597b6c55514eac6e2fefadb22332239b99e78faa12c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\
\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4459efcaad2c1e7ab6acad4f70731a19325a72c01d38b2f5c5ebb0e654c3e652\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7bc7ce39ff7ab01bae0a1441c0086dd0bb588059f1c38dcf038a03d08f73e0f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc47
5aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b33a6c20a19b1f41a4ad2db77e01a63543fbc61ea2e64cd19c7c6530bae76c3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b33a6c20a19b1f41a4ad2db77e01a63543fbc61ea2e64cd19c7c6530bae76c3b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T09:09:22Z\\\",\\\"message\\\":\\\"var.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"ClientsAllowCBOR\\\\\\\" enabled=false\\\\nW0120 09:09:21.702814 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0120 09:09:21.703031 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0120 09:09:21.704002 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-4007456761/tls.crt::/tmp/serving-cert-4007456761/tls.key\\\\\\\"\\\\nI0120 09:09:22.179437 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 09:09:22.181269 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 09:09:22.181287 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 09:09:22.181316 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 09:09:22.181321 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 09:09:22.184781 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 09:09:22.184834 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:09:22.184840 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:09:22.184845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 09:09:22.184848 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 09:09:22.184851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 09:09:22.184854 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 09:09:22.185244 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 09:09:22.186562 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T09:09:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6a65133584c92a02557ec7a68bc231cbf328c72b94121d393761fae9e77a43df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\
":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f042b661a3072f2466176ad58de653a1ac5fd34d0d1c9b846b833b88bded9006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f042b661a3072f2466176ad58de653a1ac5fd34d0d1c9b846b833b88bded9006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:08:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.216918 5115 scope.go:117] "RemoveContainer" containerID="b33a6c20a19b1f41a4ad2db77e01a63543fbc61ea2e64cd19c7c6530bae76c3b" Jan 20 09:09:53 crc kubenswrapper[5115]: E0120 09:09:53.217175 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: 
\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.218274 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.232929 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-sfqm7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5976ec5f-b09c-4f83-802d-6042842fd8e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt9ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt9ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-sfqm7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.270854 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 
09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.270942 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.270958 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.270981 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.270997 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:53Z","lastTransitionTime":"2026-01-20T09:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.373977 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.374016 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.374025 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.374041 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.374050 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:53Z","lastTransitionTime":"2026-01-20T09:09:53Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.476818 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.476880 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.476929 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.476952 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.476964 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:53Z","lastTransitionTime":"2026-01-20T09:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.579499 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.579564 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.579579 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.579604 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.579620 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:53Z","lastTransitionTime":"2026-01-20T09:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.682764 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.683193 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.683213 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.683234 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.683247 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:53Z","lastTransitionTime":"2026-01-20T09:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.694222 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-xjql7" event={"ID":"f41177fd-db48-43c1-9a8d-69cad41d3fab","Type":"ContainerStarted","Data":"a865a33344a91fb61ba891497bd1d13a6849531c298102a1405e220a44d2933e"} Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.696133 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" event={"ID":"0b51ef97-33e0-4889-bd54-ac4be09c39e7","Type":"ContainerStarted","Data":"524bafe9b9fb2826c32ba260baaac1dd3bdacd715c281152d61af32d4919eba0"} Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.696226 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" event={"ID":"0b51ef97-33e0-4889-bd54-ac4be09c39e7","Type":"ContainerStarted","Data":"4fe1f9bd2203e20099f4a6f3c4a22df44a05e962178d45b9a0fa66ab33395af9"} Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.710600 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.723922 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-xjql7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f41177fd-db48-43c1-9a8d-69cad41d3fab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"},\\\"containerID\\\":\\\"cri-o://a865a33344a91fb61ba891497bd1d13a6849531c298102a1405e220a44d2933e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:09:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"moun
tPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zmmw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xjql7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:53 crc 
kubenswrapper[5115]: I0120 09:09:53.733815 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5tt8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92f344d4-34bc-4412-83c9-6b7beb45db64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"},\\\"containerID\\\":\\\"cri-o://ebbc93aa8ffe71c586af90a1ae797c4ebc8c5f3006d2f2cd16fe20b169f230b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:09:51Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":1001}},\\\"volumeMo
unts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwps7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5tt8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.756334 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69226b59-0946-40c7-a9a3-38368638de30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://3438785036ee5cce0cfb7ef5015765de9e91020a660f22067f83fe7088f6983a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",
\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://7cf2bf860f3578cf077c66e64feccdb0f4aa9b087c452b75e9089435dbe938ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f147340eaa8ad9365db74bb82cf821ebd6579e31407e87af1956220ccf9907a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\
"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c3c4ab2513a300c9031279fe7c4f932126d69745f336cee3a8adcd6cd8bd0cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://92465a413675efac7faed27b64279954bdfa6292127a177c3bff862358a9a025\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a68
2480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f10288ab59089753121af19b0f7ad453fc7b7a50d66e5d429b1f5e46962dbc35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f10288ab59089753121af19b0f7ad453fc7b7a50d66e5d429b1f5e46962dbc35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}
,\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://7eb1250e743174b8367f49423c7ae31f24a26c0ce04d9cf83c7747f12e1363c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7eb1250e743174b8367f49423c7ae31f24a26c0ce04d9cf83c7747f12e1363c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://e0f6ae1955cc25375e7a0bdbfb67efdcbb6a702b9faf54d40a444392f0e7c151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-
o://e0f6ae1955cc25375e7a0bdbfb67efdcbb6a702b9faf54d40a444392f0e7c151\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:08:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.767833 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b55ff7536dd9e8b83a738f5d6e23ff8882a27e30ba3ce9d545ea86cb80d7e1ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:09:51Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.779169 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.797551 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zvfcd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc89765b-3b00-4f86-ae67-a5088c182918\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://0cb99b9960631ec0d3f80adf4b325d73a90bdebbe453648f57cffc26e11a89e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:09:52Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-p
roxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7g8mg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://95c07e0438f206b88563e2b39a6250eb2706530b4f1d2646ed4348287befe586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:09:52Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7g8mg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zvfcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.799263 5115 kubelet_node_status.go:736] "Recording 
event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.799310 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.799323 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.799341 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.799355 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:53Z","lastTransitionTime":"2026-01-20T09:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.809919 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-bht7q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"650d165f-75fb-4a16-a8fa-d8366b5f6eea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"},\\\"containerID\\\":\\\"cri-o://8a493b21b70e5ca7478414f87f98ad6276550fe379c53f2a7de532436a079af9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:09:51Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\
\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2p9bt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bht7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.826027 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bmvv2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b42cc5a-50db-4588-8149-e758f33704ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bmvv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.840873 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5125ab95-d5cf-48ad-a899-3add343eaeba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://732f833d741db4f25185d597b6c55514eac6e2fefadb22332239b99e78faa12c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\
\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4459efcaad2c1e7ab6acad4f70731a19325a72c01d38b2f5c5ebb0e654c3e652\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7bc7ce39ff7ab01bae0a1441c0086dd0bb588059f1c38dcf038a03d08f73e0f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc47
5aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b33a6c20a19b1f41a4ad2db77e01a63543fbc61ea2e64cd19c7c6530bae76c3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b33a6c20a19b1f41a4ad2db77e01a63543fbc61ea2e64cd19c7c6530bae76c3b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T09:09:22Z\\\",\\\"message\\\":\\\"var.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"ClientsAllowCBOR\\\\\\\" enabled=false\\\\nW0120 09:09:21.702814 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0120 09:09:21.703031 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0120 09:09:21.704002 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-4007456761/tls.crt::/tmp/serving-cert-4007456761/tls.key\\\\\\\"\\\\nI0120 09:09:22.179437 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 09:09:22.181269 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 09:09:22.181287 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 09:09:22.181316 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 09:09:22.181321 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 09:09:22.184781 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 09:09:22.184834 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:09:22.184840 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:09:22.184845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 09:09:22.184848 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 09:09:22.184851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 09:09:22.184854 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 09:09:22.185244 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 09:09:22.186562 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T09:09:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6a65133584c92a02557ec7a68bc231cbf328c72b94121d393761fae9e77a43df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\
":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f042b661a3072f2466176ad58de653a1ac5fd34d0d1c9b846b833b88bded9006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f042b661a3072f2466176ad58de653a1ac5fd34d0d1c9b846b833b88bded9006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:08:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.852695 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.863423 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-sfqm7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5976ec5f-b09c-4f83-802d-6042842fd8e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt9ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt9ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-sfqm7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.872694 5115 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25383c7b-b61c-48bd-b099-c7c8f90c6c1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f93bd1c4ac75f0c99554549eefe09dda170f1b0afebc9787b7fd0a0494295d1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disable
d\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://af4e50c289ad8aa409f7e54320df8a017c15f3b8c608266e408fe79c4155cbc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af4e50c289ad8aa409f7e54320df8a017c15f3b8c608266e408fe79c4155cbc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:08:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.888638 5115 status_manager.go:919] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c36dad2-2b5f-476d-ae16-db72a8a479e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://9cba2d9418782f2aa23b490fca45506e8a44b0f733ce30c248299532a7c06d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPa
th\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://cee213223198b5e3642cdac2764daeb64bf20128377548aa985feafed2a3d447\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a2d7f893e43011292fd2dc960e3f3f89c2af1830eace24fdafba43340a362e1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\
\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6c00207af01190039121d0127e5a029446b01758e672d57fe7d8c31b546a00d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/c
a-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:08:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.900477 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74386c11-427f-467a-bfa5-799093f908c9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://62aeee29713cf7b320e1bbf81544cbd80fb6575f67080fb534f54cbf1267a767\\\",\\\"i
mage\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://568bfe79c3828aa5c26a80f41e7507eaa2342c0c17fb8d4b2e330a163c96af56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/k
ubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://81b0daa998eef062af8f4d4bb257256cfa372aed58e0bbba4e167bbfa574acd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://70cfe00add4342ecc6915b829d12c1c4a9476005e6a1442944be45056180b3f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\
\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70cfe00add4342ecc6915b829d12c1c4a9476005e6a1442944be45056180b3f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:08:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.902687 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.902772 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.902792 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.902814 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.902859 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:53Z","lastTransitionTime":"2026-01-20T09:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.916042 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.924921 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tzrjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d8f5093-1a2e-4c32-8c74-b6cfb185cc99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtcxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtcxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tzrjx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.947735 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b51ef97-33e0-4889-bd54-ac4be09c39e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a7ed1933ad1c3e8e4846138b7c25f0e01b03dbae5680684a35133c923073286\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a7ed1933ad1c3e8e4846138b7c25f0e01b03dbae5680684a35133c923073286\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:09:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:09:52Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\
":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pnd9p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.952920 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.952960 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.952988 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 20 09:09:53 crc kubenswrapper[5115]: E0120 09:09:53.953067 5115 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 09:09:53 crc kubenswrapper[5115]: E0120 09:09:53.953113 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-20 09:10:09.953102046 +0000 UTC m=+120.121880576 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 09:09:53 crc kubenswrapper[5115]: E0120 09:09:53.953194 5115 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 09:09:53 crc kubenswrapper[5115]: E0120 09:09:53.953352 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-20 09:10:09.953314672 +0000 UTC m=+120.122093192 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 09:09:53 crc kubenswrapper[5115]: E0120 09:09:53.953367 5115 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 09:09:53 crc kubenswrapper[5115]: E0120 09:09:53.953402 5115 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 09:09:53 crc kubenswrapper[5115]: E0120 09:09:53.953417 5115 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 09:09:53 crc kubenswrapper[5115]: E0120 09:09:53.953459 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-20 09:10:09.953449555 +0000 UTC m=+120.122228565 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.960616 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.005608 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.005655 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.005666 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.005683 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.005697 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:54Z","lastTransitionTime":"2026-01-20T09:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.053843 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:09:54 crc kubenswrapper[5115]: E0120 09:09:54.054118 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:10.054077802 +0000 UTC m=+120.222856342 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.054236 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.054341 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3d8f5093-1a2e-4c32-8c74-b6cfb185cc99-metrics-certs\") pod 
\"network-metrics-daemon-tzrjx\" (UID: \"3d8f5093-1a2e-4c32-8c74-b6cfb185cc99\") " pod="openshift-multus/network-metrics-daemon-tzrjx" Jan 20 09:09:54 crc kubenswrapper[5115]: E0120 09:09:54.054501 5115 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 09:09:54 crc kubenswrapper[5115]: E0120 09:09:54.054578 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d8f5093-1a2e-4c32-8c74-b6cfb185cc99-metrics-certs podName:3d8f5093-1a2e-4c32-8c74-b6cfb185cc99 nodeName:}" failed. No retries permitted until 2026-01-20 09:10:10.054560276 +0000 UTC m=+120.223338806 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3d8f5093-1a2e-4c32-8c74-b6cfb185cc99-metrics-certs") pod "network-metrics-daemon-tzrjx" (UID: "3d8f5093-1a2e-4c32-8c74-b6cfb185cc99") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 09:09:54 crc kubenswrapper[5115]: E0120 09:09:54.054499 5115 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 09:09:54 crc kubenswrapper[5115]: E0120 09:09:54.054632 5115 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 09:09:54 crc kubenswrapper[5115]: E0120 09:09:54.054646 5115 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 09:09:54 crc kubenswrapper[5115]: E0120 09:09:54.054720 5115 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-20 09:10:10.054701029 +0000 UTC m=+120.223479559 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.108623 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.108690 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.108702 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.108719 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.108731 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:54Z","lastTransitionTime":"2026-01-20T09:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.211806 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.212369 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.212389 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.212415 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.212430 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:54Z","lastTransitionTime":"2026-01-20T09:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.216256 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.216288 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.216393 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 20 09:09:54 crc kubenswrapper[5115]: E0120 09:09:54.216404 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 20 09:09:54 crc kubenswrapper[5115]: E0120 09:09:54.216496 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 20 09:09:54 crc kubenswrapper[5115]: E0120 09:09:54.216710 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.217598 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-tzrjx" Jan 20 09:09:54 crc kubenswrapper[5115]: E0120 09:09:54.217747 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tzrjx" podUID="3d8f5093-1a2e-4c32-8c74-b6cfb185cc99" Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.314384 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.314438 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.314449 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.314463 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.314473 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:54Z","lastTransitionTime":"2026-01-20T09:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.428197 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.428256 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.428272 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.428292 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.428305 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:54Z","lastTransitionTime":"2026-01-20T09:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.531130 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.531196 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.531208 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.531224 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.531256 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:54Z","lastTransitionTime":"2026-01-20T09:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.633481 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.633565 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.633577 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.633622 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.633631 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:54Z","lastTransitionTime":"2026-01-20T09:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.702543 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"42de74bd48899fc57520fc4e45923690712aec29576a30790a2275dad3b7e5f9"} Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.702626 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"4086c3ea2d85e4b296e8536fac149813e0d785aca75891f55621eeb44af23813"} Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.704265 5115 generic.go:358] "Generic (PLEG): container finished" podID="4b42cc5a-50db-4588-8149-e758f33704ef" containerID="1c01d6e379df67f685800890a1c7d12280aee6039416a2bf9a5ef2225e972142" exitCode=0 Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.704378 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bmvv2" event={"ID":"4b42cc5a-50db-4588-8149-e758f33704ef","Type":"ContainerDied","Data":"1c01d6e379df67f685800890a1c7d12280aee6039416a2bf9a5ef2225e972142"} Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.710276 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" event={"ID":"0b51ef97-33e0-4889-bd54-ac4be09c39e7","Type":"ContainerStarted","Data":"e27e5e9fbb542a35e148c108a51be897d3bad20213ec443e846c659fd47daab6"} Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.710326 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" event={"ID":"0b51ef97-33e0-4889-bd54-ac4be09c39e7","Type":"ContainerStarted","Data":"3e2ae5d7fdc947424efda094b0ac4baf576f59e1e70b2b229386f40b16262dbb"} Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.710344 5115 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" event={"ID":"0b51ef97-33e0-4889-bd54-ac4be09c39e7","Type":"ContainerStarted","Data":"78337bb7c60f2c9302a636e3343c0c887f813ab04815aef94f1ce3af7d9061d2"} Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.710362 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" event={"ID":"0b51ef97-33e0-4889-bd54-ac4be09c39e7","Type":"ContainerStarted","Data":"a30d3012d5497a1a5c437ee4a4e23ed164c589507f56546cf0ae81558d2146cb"} Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.742155 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.742215 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.742233 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.742257 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.742274 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:54Z","lastTransitionTime":"2026-01-20T09:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.796715 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=17.796678613 podStartE2EDuration="17.796678613s" podCreationTimestamp="2026-01-20 09:09:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:09:54.773401029 +0000 UTC m=+104.942179569" watchObservedRunningTime="2026-01-20 09:09:54.796678613 +0000 UTC m=+104.965457193" Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.818071 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=17.818043985 podStartE2EDuration="17.818043985s" podCreationTimestamp="2026-01-20 09:09:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:09:54.796600331 +0000 UTC m=+104.965378891" watchObservedRunningTime="2026-01-20 09:09:54.818043985 +0000 UTC m=+104.986822525" Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.818252 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=17.818246041 podStartE2EDuration="17.818246041s" podCreationTimestamp="2026-01-20 09:09:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:09:54.818193889 +0000 UTC m=+104.986972469" watchObservedRunningTime="2026-01-20 09:09:54.818246041 +0000 UTC m=+104.987024581" Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.846336 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:54 crc 
kubenswrapper[5115]: I0120 09:09:54.846393 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.846407 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.846426 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.846439 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:54Z","lastTransitionTime":"2026-01-20T09:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.924202 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-xjql7" podStartSLOduration=83.92417931 podStartE2EDuration="1m23.92417931s" podCreationTimestamp="2026-01-20 09:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:09:54.923829031 +0000 UTC m=+105.092607611" watchObservedRunningTime="2026-01-20 09:09:54.92417931 +0000 UTC m=+105.092957830" Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.949216 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.949265 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.949275 5115 kubelet_node_status.go:736] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.949288 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.949299 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:54Z","lastTransitionTime":"2026-01-20T09:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.969237 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-5tt8v" podStartSLOduration=83.969221887 podStartE2EDuration="1m23.969221887s" podCreationTimestamp="2026-01-20 09:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:09:54.938577886 +0000 UTC m=+105.107356406" watchObservedRunningTime="2026-01-20 09:09:54.969221887 +0000 UTC m=+105.138000407" Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.969437 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=16.969432542 podStartE2EDuration="16.969432542s" podCreationTimestamp="2026-01-20 09:09:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:09:54.968860147 +0000 UTC m=+105.137638667" watchObservedRunningTime="2026-01-20 09:09:54.969432542 +0000 UTC m=+105.138211062" Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.015140 5115 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-machine-config-operator/machine-config-daemon-zvfcd" podStartSLOduration=84.014927702 podStartE2EDuration="1m24.014927702s" podCreationTimestamp="2026-01-20 09:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:09:55.014744786 +0000 UTC m=+105.183523316" watchObservedRunningTime="2026-01-20 09:09:55.014927702 +0000 UTC m=+105.183706272" Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.028327 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-bht7q" podStartSLOduration=84.02830191 podStartE2EDuration="1m24.02830191s" podCreationTimestamp="2026-01-20 09:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:09:55.027047946 +0000 UTC m=+105.195826506" watchObservedRunningTime="2026-01-20 09:09:55.02830191 +0000 UTC m=+105.197080480" Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.053768 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.054257 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.054350 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.054443 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.054525 5115 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:55Z","lastTransitionTime":"2026-01-20T09:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.156258 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.156555 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.156679 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.156760 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.156826 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:55Z","lastTransitionTime":"2026-01-20T09:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.259776 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.260284 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.260304 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.260334 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.260354 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:55Z","lastTransitionTime":"2026-01-20T09:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.363431 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.363536 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.363565 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.363600 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.363626 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:55Z","lastTransitionTime":"2026-01-20T09:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.466073 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.466143 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.466162 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.466187 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.466207 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:55Z","lastTransitionTime":"2026-01-20T09:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.569294 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.569368 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.569412 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.569447 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.569472 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:55Z","lastTransitionTime":"2026-01-20T09:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.671820 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.671929 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.671957 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.671990 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.672013 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:55Z","lastTransitionTime":"2026-01-20T09:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.718236 5115 generic.go:358] "Generic (PLEG): container finished" podID="4b42cc5a-50db-4588-8149-e758f33704ef" containerID="93e19dc8e1e75dbba4d59a1fe5d94c21410eba7cde11cc778bff185c983d2dde" exitCode=0 Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.718356 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bmvv2" event={"ID":"4b42cc5a-50db-4588-8149-e758f33704ef","Type":"ContainerDied","Data":"93e19dc8e1e75dbba4d59a1fe5d94c21410eba7cde11cc778bff185c983d2dde"} Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.777230 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.777291 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.777312 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.777332 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.777346 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:55Z","lastTransitionTime":"2026-01-20T09:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.880397 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.880458 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.880471 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.880492 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.880508 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:55Z","lastTransitionTime":"2026-01-20T09:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.983924 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.983972 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.984000 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.984023 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.984035 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:55Z","lastTransitionTime":"2026-01-20T09:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:56 crc kubenswrapper[5115]: I0120 09:09:56.086512 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:56 crc kubenswrapper[5115]: I0120 09:09:56.086568 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:56 crc kubenswrapper[5115]: I0120 09:09:56.086587 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:56 crc kubenswrapper[5115]: I0120 09:09:56.086610 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:56 crc kubenswrapper[5115]: I0120 09:09:56.086629 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:56Z","lastTransitionTime":"2026-01-20T09:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:56 crc kubenswrapper[5115]: I0120 09:09:56.189466 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:56 crc kubenswrapper[5115]: I0120 09:09:56.189575 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:56 crc kubenswrapper[5115]: I0120 09:09:56.189632 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:56 crc kubenswrapper[5115]: I0120 09:09:56.189661 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:56 crc kubenswrapper[5115]: I0120 09:09:56.189681 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:56Z","lastTransitionTime":"2026-01-20T09:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:09:56 crc kubenswrapper[5115]: I0120 09:09:56.216683 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 20 09:09:56 crc kubenswrapper[5115]: I0120 09:09:56.216683 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 20 09:09:56 crc kubenswrapper[5115]: E0120 09:09:56.216821 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 20 09:09:56 crc kubenswrapper[5115]: E0120 09:09:56.217000 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 20 09:09:56 crc kubenswrapper[5115]: I0120 09:09:56.217047 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tzrjx" Jan 20 09:09:56 crc kubenswrapper[5115]: E0120 09:09:56.217434 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tzrjx" podUID="3d8f5093-1a2e-4c32-8c74-b6cfb185cc99" Jan 20 09:09:56 crc kubenswrapper[5115]: I0120 09:09:56.217063 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 20 09:09:56 crc kubenswrapper[5115]: E0120 09:09:56.217745 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 20 09:09:56 crc kubenswrapper[5115]: I0120 09:09:56.292238 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:56 crc kubenswrapper[5115]: I0120 09:09:56.292294 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:56 crc kubenswrapper[5115]: I0120 09:09:56.292313 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:56 crc kubenswrapper[5115]: I0120 09:09:56.292336 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:56 crc kubenswrapper[5115]: I0120 09:09:56.292354 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:56Z","lastTransitionTime":"2026-01-20T09:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:56 crc kubenswrapper[5115]: I0120 09:09:56.395550 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:56 crc kubenswrapper[5115]: I0120 09:09:56.395596 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:56 crc kubenswrapper[5115]: I0120 09:09:56.395616 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:56 crc kubenswrapper[5115]: I0120 09:09:56.395642 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:56 crc kubenswrapper[5115]: I0120 09:09:56.395659 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:56Z","lastTransitionTime":"2026-01-20T09:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:56 crc kubenswrapper[5115]: I0120 09:09:56.498407 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:56 crc kubenswrapper[5115]: I0120 09:09:56.498475 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:56 crc kubenswrapper[5115]: I0120 09:09:56.498493 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:56 crc kubenswrapper[5115]: I0120 09:09:56.498517 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:56 crc kubenswrapper[5115]: I0120 09:09:56.498535 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:56Z","lastTransitionTime":"2026-01-20T09:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:56 crc kubenswrapper[5115]: I0120 09:09:56.600986 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:56 crc kubenswrapper[5115]: I0120 09:09:56.601062 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:56 crc kubenswrapper[5115]: I0120 09:09:56.601085 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:56 crc kubenswrapper[5115]: I0120 09:09:56.601113 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:56 crc kubenswrapper[5115]: I0120 09:09:56.601133 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:56Z","lastTransitionTime":"2026-01-20T09:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:56 crc kubenswrapper[5115]: I0120 09:09:56.703707 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:56 crc kubenswrapper[5115]: I0120 09:09:56.703788 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:56 crc kubenswrapper[5115]: I0120 09:09:56.703809 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:56 crc kubenswrapper[5115]: I0120 09:09:56.703833 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:56 crc kubenswrapper[5115]: I0120 09:09:56.703852 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:56Z","lastTransitionTime":"2026-01-20T09:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:56 crc kubenswrapper[5115]: I0120 09:09:56.724321 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"a93ec71993e4c56239bbf76149ff10cda2f8e68e538501dda45a8338b48de997"} Jan 20 09:09:56 crc kubenswrapper[5115]: I0120 09:09:56.727801 5115 generic.go:358] "Generic (PLEG): container finished" podID="4b42cc5a-50db-4588-8149-e758f33704ef" containerID="23638954be69df0286611e0aaf546639ef982b5aeb0d53cb2de8d34c8a7ed899" exitCode=0 Jan 20 09:09:56 crc kubenswrapper[5115]: I0120 09:09:56.727881 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bmvv2" event={"ID":"4b42cc5a-50db-4588-8149-e758f33704ef","Type":"ContainerDied","Data":"23638954be69df0286611e0aaf546639ef982b5aeb0d53cb2de8d34c8a7ed899"} Jan 20 09:09:56 crc kubenswrapper[5115]: I0120 09:09:56.737769 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" event={"ID":"0b51ef97-33e0-4889-bd54-ac4be09c39e7","Type":"ContainerStarted","Data":"c7cecfc3dfcd46299a42d88a01cb68349ccf193c1d236e51c02d572d961be382"} Jan 20 09:09:56 crc kubenswrapper[5115]: I0120 09:09:56.811114 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:56 crc kubenswrapper[5115]: I0120 09:09:56.811173 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:56 crc kubenswrapper[5115]: I0120 09:09:56.811198 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:56 crc kubenswrapper[5115]: I0120 09:09:56.811229 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:56 crc kubenswrapper[5115]: I0120 
09:09:56.811252 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:56Z","lastTransitionTime":"2026-01-20T09:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:09:56 crc kubenswrapper[5115]: I0120 09:09:56.914598 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:56 crc kubenswrapper[5115]: I0120 09:09:56.914752 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:56 crc kubenswrapper[5115]: I0120 09:09:56.914777 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:56 crc kubenswrapper[5115]: I0120 09:09:56.914802 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:56 crc kubenswrapper[5115]: I0120 09:09:56.914851 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:56Z","lastTransitionTime":"2026-01-20T09:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:57 crc kubenswrapper[5115]: I0120 09:09:57.017628 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:57 crc kubenswrapper[5115]: I0120 09:09:57.017719 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:57 crc kubenswrapper[5115]: I0120 09:09:57.017741 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:57 crc kubenswrapper[5115]: I0120 09:09:57.017770 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:57 crc kubenswrapper[5115]: I0120 09:09:57.017788 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:57Z","lastTransitionTime":"2026-01-20T09:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:57 crc kubenswrapper[5115]: I0120 09:09:57.120650 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:57 crc kubenswrapper[5115]: I0120 09:09:57.120700 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:57 crc kubenswrapper[5115]: I0120 09:09:57.120711 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:57 crc kubenswrapper[5115]: I0120 09:09:57.120726 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:57 crc kubenswrapper[5115]: I0120 09:09:57.120737 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:57Z","lastTransitionTime":"2026-01-20T09:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:57 crc kubenswrapper[5115]: I0120 09:09:57.224153 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:57 crc kubenswrapper[5115]: I0120 09:09:57.224223 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:57 crc kubenswrapper[5115]: I0120 09:09:57.224241 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:57 crc kubenswrapper[5115]: I0120 09:09:57.224270 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:57 crc kubenswrapper[5115]: I0120 09:09:57.224293 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:57Z","lastTransitionTime":"2026-01-20T09:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:57 crc kubenswrapper[5115]: I0120 09:09:57.326242 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:57 crc kubenswrapper[5115]: I0120 09:09:57.326307 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:57 crc kubenswrapper[5115]: I0120 09:09:57.326329 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:57 crc kubenswrapper[5115]: I0120 09:09:57.326354 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:57 crc kubenswrapper[5115]: I0120 09:09:57.326376 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:57Z","lastTransitionTime":"2026-01-20T09:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:57 crc kubenswrapper[5115]: I0120 09:09:57.428685 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:57 crc kubenswrapper[5115]: I0120 09:09:57.428757 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:57 crc kubenswrapper[5115]: I0120 09:09:57.428776 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:57 crc kubenswrapper[5115]: I0120 09:09:57.428805 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:57 crc kubenswrapper[5115]: I0120 09:09:57.428829 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:57Z","lastTransitionTime":"2026-01-20T09:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:57 crc kubenswrapper[5115]: I0120 09:09:57.531790 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:57 crc kubenswrapper[5115]: I0120 09:09:57.531880 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:57 crc kubenswrapper[5115]: I0120 09:09:57.531931 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:57 crc kubenswrapper[5115]: I0120 09:09:57.531959 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:57 crc kubenswrapper[5115]: I0120 09:09:57.531976 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:57Z","lastTransitionTime":"2026-01-20T09:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:57 crc kubenswrapper[5115]: I0120 09:09:57.635015 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:57 crc kubenswrapper[5115]: I0120 09:09:57.635107 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:57 crc kubenswrapper[5115]: I0120 09:09:57.635130 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:57 crc kubenswrapper[5115]: I0120 09:09:57.635161 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:57 crc kubenswrapper[5115]: I0120 09:09:57.635185 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:57Z","lastTransitionTime":"2026-01-20T09:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:57 crc kubenswrapper[5115]: I0120 09:09:57.738283 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:57 crc kubenswrapper[5115]: I0120 09:09:57.738367 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:57 crc kubenswrapper[5115]: I0120 09:09:57.738394 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:57 crc kubenswrapper[5115]: I0120 09:09:57.738427 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:57 crc kubenswrapper[5115]: I0120 09:09:57.738455 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:57Z","lastTransitionTime":"2026-01-20T09:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:57 crc kubenswrapper[5115]: I0120 09:09:57.745683 5115 generic.go:358] "Generic (PLEG): container finished" podID="4b42cc5a-50db-4588-8149-e758f33704ef" containerID="1a13633dc4f230ffbd25769764224ca0d8e8fb1608692912319b0741bae6f275" exitCode=0 Jan 20 09:09:57 crc kubenswrapper[5115]: I0120 09:09:57.745811 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bmvv2" event={"ID":"4b42cc5a-50db-4588-8149-e758f33704ef","Type":"ContainerDied","Data":"1a13633dc4f230ffbd25769764224ca0d8e8fb1608692912319b0741bae6f275"} Jan 20 09:09:57 crc kubenswrapper[5115]: I0120 09:09:57.841799 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:57 crc kubenswrapper[5115]: I0120 09:09:57.841876 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:57 crc kubenswrapper[5115]: I0120 09:09:57.841929 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:57 crc kubenswrapper[5115]: I0120 09:09:57.841957 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:57 crc kubenswrapper[5115]: I0120 09:09:57.841976 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:57Z","lastTransitionTime":"2026-01-20T09:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:57 crc kubenswrapper[5115]: I0120 09:09:57.944336 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:57 crc kubenswrapper[5115]: I0120 09:09:57.944400 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:57 crc kubenswrapper[5115]: I0120 09:09:57.944455 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:57 crc kubenswrapper[5115]: I0120 09:09:57.944496 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:57 crc kubenswrapper[5115]: I0120 09:09:57.944513 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:57Z","lastTransitionTime":"2026-01-20T09:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.047485 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.047552 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.047570 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.047589 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.047604 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:58Z","lastTransitionTime":"2026-01-20T09:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.150614 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.150704 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.150717 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.150736 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.150753 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:58Z","lastTransitionTime":"2026-01-20T09:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.216546 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.216550 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 20 09:09:58 crc kubenswrapper[5115]: E0120 09:09:58.216729 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.216760 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tzrjx" Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.216807 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 20 09:09:58 crc kubenswrapper[5115]: E0120 09:09:58.216956 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 20 09:09:58 crc kubenswrapper[5115]: E0120 09:09:58.217060 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tzrjx" podUID="3d8f5093-1a2e-4c32-8c74-b6cfb185cc99" Jan 20 09:09:58 crc kubenswrapper[5115]: E0120 09:09:58.217126 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.253212 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.253250 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.253259 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.253273 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.253283 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:58Z","lastTransitionTime":"2026-01-20T09:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.355483 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.355558 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.355578 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.355605 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.355623 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:58Z","lastTransitionTime":"2026-01-20T09:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.458231 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.458306 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.458331 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.458360 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.458383 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:58Z","lastTransitionTime":"2026-01-20T09:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.561093 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.561157 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.561177 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.561204 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.561222 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:58Z","lastTransitionTime":"2026-01-20T09:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.664757 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.665314 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.665341 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.665372 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.665395 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:58Z","lastTransitionTime":"2026-01-20T09:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.753101 5115 generic.go:358] "Generic (PLEG): container finished" podID="4b42cc5a-50db-4588-8149-e758f33704ef" containerID="7146d1371e225c859e189baf0ecb8196a4c61a5eb99820fa325e1ffbd66a1630" exitCode=0 Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.753190 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bmvv2" event={"ID":"4b42cc5a-50db-4588-8149-e758f33704ef","Type":"ContainerDied","Data":"7146d1371e225c859e189baf0ecb8196a4c61a5eb99820fa325e1ffbd66a1630"} Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.761954 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" event={"ID":"0b51ef97-33e0-4889-bd54-ac4be09c39e7","Type":"ContainerStarted","Data":"f6b7022d24953d48ed1163c056d62ecaac06c48fcd940ff10ada258fd284089a"} Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.767657 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.767731 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.767751 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.767781 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.767807 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:58Z","lastTransitionTime":"2026-01-20T09:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.871294 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.871359 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.871379 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.871403 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.871420 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:58Z","lastTransitionTime":"2026-01-20T09:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.975164 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.975290 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.975310 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.975336 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.975354 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:58Z","lastTransitionTime":"2026-01-20T09:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:59 crc kubenswrapper[5115]: I0120 09:09:59.070278 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:09:59 crc kubenswrapper[5115]: I0120 09:09:59.070311 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:09:59 crc kubenswrapper[5115]: I0120 09:09:59.078470 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:59 crc kubenswrapper[5115]: I0120 09:09:59.078544 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:59 crc kubenswrapper[5115]: I0120 09:09:59.078566 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:59 crc kubenswrapper[5115]: I0120 09:09:59.078595 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:59 crc kubenswrapper[5115]: I0120 09:09:59.078614 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:59Z","lastTransitionTime":"2026-01-20T09:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:59 crc kubenswrapper[5115]: I0120 09:09:59.122485 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:09:59 crc kubenswrapper[5115]: I0120 09:09:59.180739 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:59 crc kubenswrapper[5115]: I0120 09:09:59.180794 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:59 crc kubenswrapper[5115]: I0120 09:09:59.180805 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:59 crc kubenswrapper[5115]: I0120 09:09:59.180823 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:59 crc kubenswrapper[5115]: I0120 09:09:59.180836 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:59Z","lastTransitionTime":"2026-01-20T09:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:59 crc kubenswrapper[5115]: I0120 09:09:59.197772 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" podStartSLOduration=88.197750514 podStartE2EDuration="1m28.197750514s" podCreationTimestamp="2026-01-20 09:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:09:59.144017824 +0000 UTC m=+109.312796354" watchObservedRunningTime="2026-01-20 09:09:59.197750514 +0000 UTC m=+109.366529044" Jan 20 09:09:59 crc kubenswrapper[5115]: I0120 09:09:59.283360 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:59 crc kubenswrapper[5115]: I0120 09:09:59.283441 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:59 crc kubenswrapper[5115]: I0120 09:09:59.283467 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:59 crc kubenswrapper[5115]: I0120 09:09:59.283499 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:59 crc kubenswrapper[5115]: I0120 09:09:59.283522 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:59Z","lastTransitionTime":"2026-01-20T09:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:59 crc kubenswrapper[5115]: I0120 09:09:59.386540 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:59 crc kubenswrapper[5115]: I0120 09:09:59.386615 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:59 crc kubenswrapper[5115]: I0120 09:09:59.386630 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:59 crc kubenswrapper[5115]: I0120 09:09:59.386676 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:59 crc kubenswrapper[5115]: I0120 09:09:59.386692 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:59Z","lastTransitionTime":"2026-01-20T09:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:59 crc kubenswrapper[5115]: I0120 09:09:59.489190 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:59 crc kubenswrapper[5115]: I0120 09:09:59.489258 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:59 crc kubenswrapper[5115]: I0120 09:09:59.489276 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:59 crc kubenswrapper[5115]: I0120 09:09:59.489300 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:59 crc kubenswrapper[5115]: I0120 09:09:59.489319 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:59Z","lastTransitionTime":"2026-01-20T09:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:59 crc kubenswrapper[5115]: I0120 09:09:59.592025 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:59 crc kubenswrapper[5115]: I0120 09:09:59.592198 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:59 crc kubenswrapper[5115]: I0120 09:09:59.592219 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:59 crc kubenswrapper[5115]: I0120 09:09:59.592245 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:59 crc kubenswrapper[5115]: I0120 09:09:59.592992 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:59Z","lastTransitionTime":"2026-01-20T09:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:59 crc kubenswrapper[5115]: I0120 09:09:59.696042 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:59 crc kubenswrapper[5115]: I0120 09:09:59.696105 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:59 crc kubenswrapper[5115]: I0120 09:09:59.696116 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:59 crc kubenswrapper[5115]: I0120 09:09:59.696131 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:59 crc kubenswrapper[5115]: I0120 09:09:59.696141 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:59Z","lastTransitionTime":"2026-01-20T09:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:59 crc kubenswrapper[5115]: I0120 09:09:59.783737 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bmvv2" event={"ID":"4b42cc5a-50db-4588-8149-e758f33704ef","Type":"ContainerStarted","Data":"a098e56dbbea5ca4409d69f99b7da39ee28e4043e7bc403eb6e1447175c69045"} Jan 20 09:09:59 crc kubenswrapper[5115]: I0120 09:09:59.784688 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:09:59 crc kubenswrapper[5115]: I0120 09:09:59.798175 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:59 crc kubenswrapper[5115]: I0120 09:09:59.798220 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:59 crc kubenswrapper[5115]: I0120 09:09:59.798231 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:59 crc kubenswrapper[5115]: I0120 09:09:59.798246 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:59 crc kubenswrapper[5115]: I0120 09:09:59.798261 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:59Z","lastTransitionTime":"2026-01-20T09:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:59 crc kubenswrapper[5115]: I0120 09:09:59.824127 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:09:59 crc kubenswrapper[5115]: I0120 09:09:59.900953 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:59 crc kubenswrapper[5115]: I0120 09:09:59.901334 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:59 crc kubenswrapper[5115]: I0120 09:09:59.901343 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:59 crc kubenswrapper[5115]: I0120 09:09:59.901357 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:59 crc kubenswrapper[5115]: I0120 09:09:59.901367 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:59Z","lastTransitionTime":"2026-01-20T09:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.004086 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.004134 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.004144 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.004158 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.004168 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:10:00Z","lastTransitionTime":"2026-01-20T09:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.106753 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.106809 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.106822 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.106843 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.106855 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:10:00Z","lastTransitionTime":"2026-01-20T09:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.209338 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.209395 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.209413 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.209440 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.209461 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:10:00Z","lastTransitionTime":"2026-01-20T09:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.220782 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.220966 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.221013 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-tzrjx" Jan 20 09:10:00 crc kubenswrapper[5115]: E0120 09:10:00.221074 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 20 09:10:00 crc kubenswrapper[5115]: E0120 09:10:00.221116 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tzrjx" podUID="3d8f5093-1a2e-4c32-8c74-b6cfb185cc99" Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.221164 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 20 09:10:00 crc kubenswrapper[5115]: E0120 09:10:00.221257 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 20 09:10:00 crc kubenswrapper[5115]: E0120 09:10:00.221241 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.311693 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.311732 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.311743 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.311757 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.311766 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:10:00Z","lastTransitionTime":"2026-01-20T09:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.414202 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.414269 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.414287 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.414312 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.414330 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:10:00Z","lastTransitionTime":"2026-01-20T09:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.517612 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.517755 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.517780 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.517813 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.517831 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:10:00Z","lastTransitionTime":"2026-01-20T09:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.545728 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.545811 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.545837 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.545864 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.545882 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:10:00Z","lastTransitionTime":"2026-01-20T09:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.614217 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-7c9b9cfd6-jpc2g"]
Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.618293 5115 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-jpc2g"
Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.622037 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\""
Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.622245 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\""
Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.622133 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\""
Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.622770 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\""
Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.744990 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16b38190-bc9a-4748-b5b6-58629c825842-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-jpc2g\" (UID: \"16b38190-bc9a-4748-b5b6-58629c825842\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-jpc2g"
Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.745131 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/16b38190-bc9a-4748-b5b6-58629c825842-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-jpc2g\" (UID: \"16b38190-bc9a-4748-b5b6-58629c825842\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-jpc2g"
Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.745246 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/16b38190-bc9a-4748-b5b6-58629c825842-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-jpc2g\" (UID: \"16b38190-bc9a-4748-b5b6-58629c825842\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-jpc2g"
Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.745312 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/16b38190-bc9a-4748-b5b6-58629c825842-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-jpc2g\" (UID: \"16b38190-bc9a-4748-b5b6-58629c825842\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-jpc2g"
Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.745426 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/16b38190-bc9a-4748-b5b6-58629c825842-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-jpc2g\" (UID: \"16b38190-bc9a-4748-b5b6-58629c825842\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-jpc2g"
Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.792510 5115 generic.go:358] "Generic (PLEG): container finished" podID="4b42cc5a-50db-4588-8149-e758f33704ef" containerID="a098e56dbbea5ca4409d69f99b7da39ee28e4043e7bc403eb6e1447175c69045" exitCode=0
Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.792619 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bmvv2" event={"ID":"4b42cc5a-50db-4588-8149-e758f33704ef","Type":"ContainerDied","Data":"a098e56dbbea5ca4409d69f99b7da39ee28e4043e7bc403eb6e1447175c69045"}
Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.792720 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bmvv2" event={"ID":"4b42cc5a-50db-4588-8149-e758f33704ef","Type":"ContainerStarted","Data":"0d144c564cfd9c4e5c2b3e6a6e8aec9fd0bf91968d03c1108dace4a16ebb1542"}
Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.847445 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/16b38190-bc9a-4748-b5b6-58629c825842-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-jpc2g\" (UID: \"16b38190-bc9a-4748-b5b6-58629c825842\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-jpc2g"
Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.847613 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/16b38190-bc9a-4748-b5b6-58629c825842-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-jpc2g\" (UID: \"16b38190-bc9a-4748-b5b6-58629c825842\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-jpc2g"
Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.847638 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/16b38190-bc9a-4748-b5b6-58629c825842-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-jpc2g\" (UID: \"16b38190-bc9a-4748-b5b6-58629c825842\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-jpc2g"
Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.847782 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16b38190-bc9a-4748-b5b6-58629c825842-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-jpc2g\" (UID: \"16b38190-bc9a-4748-b5b6-58629c825842\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-jpc2g"
Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.847795 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/16b38190-bc9a-4748-b5b6-58629c825842-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-jpc2g\" (UID: \"16b38190-bc9a-4748-b5b6-58629c825842\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-jpc2g"
Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.847868 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/16b38190-bc9a-4748-b5b6-58629c825842-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-jpc2g\" (UID: \"16b38190-bc9a-4748-b5b6-58629c825842\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-jpc2g"
Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.847984 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/16b38190-bc9a-4748-b5b6-58629c825842-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-jpc2g\" (UID: \"16b38190-bc9a-4748-b5b6-58629c825842\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-jpc2g"
Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.849600 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/16b38190-bc9a-4748-b5b6-58629c825842-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-jpc2g\" (UID: \"16b38190-bc9a-4748-b5b6-58629c825842\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-jpc2g"
Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.867553 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16b38190-bc9a-4748-b5b6-58629c825842-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-jpc2g\" (UID: \"16b38190-bc9a-4748-b5b6-58629c825842\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-jpc2g"
Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.878120 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/16b38190-bc9a-4748-b5b6-58629c825842-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-jpc2g\" (UID: \"16b38190-bc9a-4748-b5b6-58629c825842\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-jpc2g"
Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.939015 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-jpc2g"
Jan 20 09:10:00 crc kubenswrapper[5115]: W0120 09:10:00.964085 5115 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16b38190_bc9a_4748_b5b6_58629c825842.slice/crio-60d86305984877e67216bab82d059734cd9d1c8e2f26d5361b49baae41a05a8a WatchSource:0}: Error finding container 60d86305984877e67216bab82d059734cd9d1c8e2f26d5361b49baae41a05a8a: Status 404 returned error can't find the container with id 60d86305984877e67216bab82d059734cd9d1c8e2f26d5361b49baae41a05a8a
Jan 20 09:10:01 crc kubenswrapper[5115]: I0120 09:10:01.206625 5115 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kubelet-serving"
Jan 20 09:10:01 crc kubenswrapper[5115]: I0120 09:10:01.219297 5115 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162"
Jan 20 09:10:01 crc kubenswrapper[5115]: I0120 09:10:01.797189 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-sfqm7" event={"ID":"5976ec5f-b09c-4f83-802d-6042842fd8e6","Type":"ContainerStarted","Data":"25556cd52edb7e5bee63322ae43421b7d2f5eb1221d6ec086899b092f9060931"}
Jan 20 09:10:01 crc kubenswrapper[5115]: I0120 09:10:01.797533 5115 kubelet.go:2569]
"SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-sfqm7" event={"ID":"5976ec5f-b09c-4f83-802d-6042842fd8e6","Type":"ContainerStarted","Data":"0c87f2a3b1054bd63ba4b0c7f603ff4c686d5a70069129f3faeb23682d7b2e1e"}
Jan 20 09:10:01 crc kubenswrapper[5115]: I0120 09:10:01.799414 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-jpc2g" event={"ID":"16b38190-bc9a-4748-b5b6-58629c825842","Type":"ContainerStarted","Data":"8ac0cec58a9ec028f90b173038747a513529d2e879c00c45c86f856848377713"}
Jan 20 09:10:01 crc kubenswrapper[5115]: I0120 09:10:01.799488 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-jpc2g" event={"ID":"16b38190-bc9a-4748-b5b6-58629c825842","Type":"ContainerStarted","Data":"60d86305984877e67216bab82d059734cd9d1c8e2f26d5361b49baae41a05a8a"}
Jan 20 09:10:01 crc kubenswrapper[5115]: I0120 09:10:01.825807 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-bmvv2" podStartSLOduration=90.825790671 podStartE2EDuration="1m30.825790671s" podCreationTimestamp="2026-01-20 09:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:10:00.821452467 +0000 UTC m=+110.990231037" watchObservedRunningTime="2026-01-20 09:10:01.825790671 +0000 UTC m=+111.994569201"
Jan 20 09:10:01 crc kubenswrapper[5115]: I0120 09:10:01.826074 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-sfqm7" podStartSLOduration=90.826068629 podStartE2EDuration="1m30.826068629s" podCreationTimestamp="2026-01-20 09:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:10:01.823956683 +0000 UTC m=+111.992735213" watchObservedRunningTime="2026-01-20 09:10:01.826068629 +0000 UTC m=+111.994847159"
Jan 20 09:10:01 crc kubenswrapper[5115]: I0120 09:10:01.827780 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-tzrjx"]
Jan 20 09:10:01 crc kubenswrapper[5115]: I0120 09:10:01.827992 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tzrjx"
Jan 20 09:10:01 crc kubenswrapper[5115]: E0120 09:10:01.828101 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tzrjx" podUID="3d8f5093-1a2e-4c32-8c74-b6cfb185cc99"
Jan 20 09:10:02 crc kubenswrapper[5115]: I0120 09:10:02.216187 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 20 09:10:02 crc kubenswrapper[5115]: E0120 09:10:02.216315 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Jan 20 09:10:02 crc kubenswrapper[5115]: I0120 09:10:02.216328 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 20 09:10:02 crc kubenswrapper[5115]: I0120 09:10:02.216367 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 20 09:10:02 crc kubenswrapper[5115]: E0120 09:10:02.216434 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Jan 20 09:10:02 crc kubenswrapper[5115]: E0120 09:10:02.216674 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Jan 20 09:10:03 crc kubenswrapper[5115]: I0120 09:10:03.216732 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tzrjx"
Jan 20 09:10:03 crc kubenswrapper[5115]: E0120 09:10:03.216878 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tzrjx" podUID="3d8f5093-1a2e-4c32-8c74-b6cfb185cc99"
Jan 20 09:10:04 crc kubenswrapper[5115]: I0120 09:10:04.216577 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 20 09:10:04 crc kubenswrapper[5115]: I0120 09:10:04.216649 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 20 09:10:04 crc kubenswrapper[5115]: E0120 09:10:04.216746 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Jan 20 09:10:04 crc kubenswrapper[5115]: I0120 09:10:04.216798 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 20 09:10:04 crc kubenswrapper[5115]: E0120 09:10:04.217156 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Jan 20 09:10:04 crc kubenswrapper[5115]: E0120 09:10:04.218353 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Jan 20 09:10:04 crc kubenswrapper[5115]: I0120 09:10:04.218777 5115 scope.go:117] "RemoveContainer" containerID="b33a6c20a19b1f41a4ad2db77e01a63543fbc61ea2e64cd19c7c6530bae76c3b"
Jan 20 09:10:05 crc kubenswrapper[5115]: I0120 09:10:05.217043 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tzrjx"
Jan 20 09:10:05 crc kubenswrapper[5115]: E0120 09:10:05.217669 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tzrjx" podUID="3d8f5093-1a2e-4c32-8c74-b6cfb185cc99"
Jan 20 09:10:05 crc kubenswrapper[5115]: I0120 09:10:05.833942 5115 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log"
Jan 20 09:10:05 crc kubenswrapper[5115]: I0120 09:10:05.836631 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"cd35bfe818999fb69f754d3ef537d63114d8766c9a55fd8c1f055b4598993e53"}
Jan 20 09:10:05 crc kubenswrapper[5115]: I0120 09:10:05.837412 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 20 09:10:05 crc kubenswrapper[5115]: I0120 09:10:05.866431 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-jpc2g" podStartSLOduration=94.866410634 podStartE2EDuration="1m34.866410634s"
podCreationTimestamp="2026-01-20 09:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:10:01.846424425 +0000 UTC m=+112.015202995" watchObservedRunningTime="2026-01-20 09:10:05.866410634 +0000 UTC m=+116.035189184"
Jan 20 09:10:05 crc kubenswrapper[5115]: I0120 09:10:05.866929 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=27.866922567 podStartE2EDuration="27.866922567s" podCreationTimestamp="2026-01-20 09:09:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:10:05.86624821 +0000 UTC m=+116.035026780" watchObservedRunningTime="2026-01-20 09:10:05.866922567 +0000 UTC m=+116.035701107"
Jan 20 09:10:05 crc kubenswrapper[5115]: I0120 09:10:05.875644 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeReady"
Jan 20 09:10:05 crc kubenswrapper[5115]: I0120 09:10:05.875888 5115 kubelet_node_status.go:550] "Fast updating node status as it just became ready"
Jan 20 09:10:05 crc kubenswrapper[5115]: I0120 09:10:05.917373 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-2vzsk"]
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.106846 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-xn6qp"]
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.107051 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-2vzsk"
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.109969 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-5494g"]
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.110099 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-xn6qp"
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.112532 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\""
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.112603 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\""
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.113728 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\""
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.113908 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\""
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.113931 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\""
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.113949 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\""
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.114009 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\""
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.114159 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\""
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.114609 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-4x4rk"]
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.114837 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-5494g"
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.116262 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\""
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.118439 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-s5mfg"]
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.119138 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-4x4rk"
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.120714 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-pss2p"]
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.122986 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\""
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.123222 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\""
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.123622 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-c88bx"]
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.125447 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-s5mfg"
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.126860 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\""
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.127368 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\""
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.127676 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\""
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.127746 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\""
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.128554 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/console-64d44f6ddf-78z8z"]
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.128805 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-pss2p"
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.128865 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-c88bx"
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.131248 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\""
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.131679 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\""
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.131697 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\""
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.131748 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\""
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.131870 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\""
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.132125 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\""
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.132171 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\""
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.132320 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\""
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.132410 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\""
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.132613 5115 reflector.go:430]
"Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\""
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.132654 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\""
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.132771 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\""
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.132965 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\""
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.133136 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\""
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.133270 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\""
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.133414 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\""
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.133668 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\""
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.134100 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\""
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.134367 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\""
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.134555 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\""
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.134612 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\""
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.134661 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\""
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.134763 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\""
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.134983 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\""
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.140397 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-67c89758df-glkw9"]
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.140821 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-64d44f6ddf-78z8z"
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.144610 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\""
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.147108 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-s85qm"]
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.147724 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-glkw9"
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.150109 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\""
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.151576 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\""
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.162083 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\""
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.162664 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\""
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.162768 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\""
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.163260 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\""
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.163694 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\""
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.163850 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\""
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.164049 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\""
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.164457 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-54c688565-6lm7w"]
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.164594 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\""
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.164597 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-s85qm"
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.164953 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\""
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.166376 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\""
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.166748 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\""
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.166169 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\""
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.170480 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-747b44746d-ljj2s"]
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.170778 5115 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-6lm7w" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.172364 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.173095 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.173336 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.175039 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.178818 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.179175 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.179677 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.179880 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-777zr"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.180114 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.180424 5115 reflector.go:430] "Caches populated" 
type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.181135 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.181501 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-747b44746d-ljj2s" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.184387 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.184609 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.184926 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.185086 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.185274 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.185393 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.185666 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.185710 5115 reflector.go:430] 
"Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.185845 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.185875 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.185982 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.186000 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.186113 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.186147 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.186243 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.187007 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.187576 5115 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.190022 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.194420 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-b674j"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.194767 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-777zr" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.200042 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-5rdz6"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.201003 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.206850 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.207598 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.208241 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.209196 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\"" Jan 20 09:10:06 crc 
kubenswrapper[5115]: I0120 09:10:06.216226 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.217301 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.217544 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.220026 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.220134 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-5rdz6" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.220597 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/73f78db9-bab5-49ee-84a4-9f0825efca8a-audit-policies\") pod \"oauth-openshift-66458b6674-c88bx\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.220660 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/72f63421-cfe9-45f8-85fe-b779a81a7ebb-etcd-client\") pod \"apiserver-9ddfb9f55-xn6qp\" (UID: \"72f63421-cfe9-45f8-85fe-b779a81a7ebb\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xn6qp" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.220703 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qph7v\" 
(UniqueName: \"kubernetes.io/projected/72f63421-cfe9-45f8-85fe-b779a81a7ebb-kube-api-access-qph7v\") pod \"apiserver-9ddfb9f55-xn6qp\" (UID: \"72f63421-cfe9-45f8-85fe-b779a81a7ebb\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xn6qp" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.220736 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-c88bx\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.220995 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c6f108d0-ed4b-4318-bd96-7de2824bf73e-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-2vzsk\" (UID: \"c6f108d0-ed4b-4318-bd96-7de2824bf73e\") " pod="openshift-machine-api/machine-api-operator-755bb95488-2vzsk" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.221068 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72f63421-cfe9-45f8-85fe-b779a81a7ebb-config\") pod \"apiserver-9ddfb9f55-xn6qp\" (UID: \"72f63421-cfe9-45f8-85fe-b779a81a7ebb\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xn6qp" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.221116 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/72f63421-cfe9-45f8-85fe-b779a81a7ebb-audit-dir\") pod \"apiserver-9ddfb9f55-xn6qp\" (UID: \"72f63421-cfe9-45f8-85fe-b779a81a7ebb\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xn6qp" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 
09:10:06.221160 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5bxk\" (UniqueName: \"kubernetes.io/projected/603cfb78-063c-444d-8434-38e8ff6b5f70-kube-api-access-d5bxk\") pod \"authentication-operator-7f5c659b84-pss2p\" (UID: \"603cfb78-063c-444d-8434-38e8ff6b5f70\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-pss2p" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.221238 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-c88bx\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.221280 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c6f108d0-ed4b-4318-bd96-7de2824bf73e-images\") pod \"machine-api-operator-755bb95488-2vzsk\" (UID: \"c6f108d0-ed4b-4318-bd96-7de2824bf73e\") " pod="openshift-machine-api/machine-api-operator-755bb95488-2vzsk" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.221315 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09bd4dd7-1e94-43db-b3bc-fe2dd530d8ec-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-5494g\" (UID: \"09bd4dd7-1e94-43db-b3bc-fe2dd530d8ec\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-5494g" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.221348 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/09bd4dd7-1e94-43db-b3bc-fe2dd530d8ec-config\") pod \"openshift-apiserver-operator-846cbfc458-5494g\" (UID: \"09bd4dd7-1e94-43db-b3bc-fe2dd530d8ec\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-5494g" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.221382 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/72f63421-cfe9-45f8-85fe-b779a81a7ebb-encryption-config\") pod \"apiserver-9ddfb9f55-xn6qp\" (UID: \"72f63421-cfe9-45f8-85fe-b779a81a7ebb\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xn6qp" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.221429 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnsw2\" (UniqueName: \"kubernetes.io/projected/09bd4dd7-1e94-43db-b3bc-fe2dd530d8ec-kube-api-access-cnsw2\") pod \"openshift-apiserver-operator-846cbfc458-5494g\" (UID: \"09bd4dd7-1e94-43db-b3bc-fe2dd530d8ec\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-5494g" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.221492 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-c88bx\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.221577 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0386fc07-a367-4188-8fab-3ce5d14ad6f2-serving-cert\") pod \"apiserver-8596bd845d-4x4rk\" (UID: \"0386fc07-a367-4188-8fab-3ce5d14ad6f2\") " 
pod="openshift-oauth-apiserver/apiserver-8596bd845d-4x4rk" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.221654 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pzbj\" (UniqueName: \"kubernetes.io/projected/73f78db9-bab5-49ee-84a4-9f0825efca8a-kube-api-access-2pzbj\") pod \"oauth-openshift-66458b6674-c88bx\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.221691 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/72f63421-cfe9-45f8-85fe-b779a81a7ebb-serving-cert\") pod \"apiserver-9ddfb9f55-xn6qp\" (UID: \"72f63421-cfe9-45f8-85fe-b779a81a7ebb\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xn6qp" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.221752 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0386fc07-a367-4188-8fab-3ce5d14ad6f2-etcd-client\") pod \"apiserver-8596bd845d-4x4rk\" (UID: \"0386fc07-a367-4188-8fab-3ce5d14ad6f2\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-4x4rk" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.222123 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0386fc07-a367-4188-8fab-3ce5d14ad6f2-audit-dir\") pod \"apiserver-8596bd845d-4x4rk\" (UID: \"0386fc07-a367-4188-8fab-3ce5d14ad6f2\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-4x4rk" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.222160 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/9aa837bd-63fc-4bb8-b158-d8632117a117-console-config\") pod \"console-64d44f6ddf-78z8z\" (UID: \"9aa837bd-63fc-4bb8-b158-d8632117a117\") " pod="openshift-console/console-64d44f6ddf-78z8z" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.222184 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/603cfb78-063c-444d-8434-38e8ff6b5f70-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-pss2p\" (UID: \"603cfb78-063c-444d-8434-38e8ff6b5f70\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-pss2p" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.222324 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-c88bx\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.222360 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/72f63421-cfe9-45f8-85fe-b779a81a7ebb-node-pullsecrets\") pod \"apiserver-9ddfb9f55-xn6qp\" (UID: \"72f63421-cfe9-45f8-85fe-b779a81a7ebb\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xn6qp" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.222532 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9aa837bd-63fc-4bb8-b158-d8632117a117-console-oauth-config\") pod \"console-64d44f6ddf-78z8z\" (UID: \"9aa837bd-63fc-4bb8-b158-d8632117a117\") " pod="openshift-console/console-64d44f6ddf-78z8z" Jan 20 09:10:06 crc 
kubenswrapper[5115]: I0120 09:10:06.222573 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/603cfb78-063c-444d-8434-38e8ff6b5f70-config\") pod \"authentication-operator-7f5c659b84-pss2p\" (UID: \"603cfb78-063c-444d-8434-38e8ff6b5f70\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-pss2p" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.222624 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-c88bx\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.222660 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/603cfb78-063c-444d-8434-38e8ff6b5f70-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-pss2p\" (UID: \"603cfb78-063c-444d-8434-38e8ff6b5f70\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-pss2p" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.222774 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g768j\" (UniqueName: \"kubernetes.io/projected/3b28944b-12d3-4087-b906-99fbf2937724-kube-api-access-g768j\") pod \"openshift-config-operator-5777786469-s5mfg\" (UID: \"3b28944b-12d3-4087-b906-99fbf2937724\") " pod="openshift-config-operator/openshift-config-operator-5777786469-s5mfg" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.222808 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-rrc9m\" (UniqueName: \"kubernetes.io/projected/c6f108d0-ed4b-4318-bd96-7de2824bf73e-kube-api-access-rrc9m\") pod \"machine-api-operator-755bb95488-2vzsk\" (UID: \"c6f108d0-ed4b-4318-bd96-7de2824bf73e\") " pod="openshift-machine-api/machine-api-operator-755bb95488-2vzsk" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.222842 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4mq5\" (UniqueName: \"kubernetes.io/projected/9aa837bd-63fc-4bb8-b158-d8632117a117-kube-api-access-k4mq5\") pod \"console-64d44f6ddf-78z8z\" (UID: \"9aa837bd-63fc-4bb8-b158-d8632117a117\") " pod="openshift-console/console-64d44f6ddf-78z8z" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.222873 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/3b28944b-12d3-4087-b906-99fbf2937724-available-featuregates\") pod \"openshift-config-operator-5777786469-s5mfg\" (UID: \"3b28944b-12d3-4087-b906-99fbf2937724\") " pod="openshift-config-operator/openshift-config-operator-5777786469-s5mfg" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.222941 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-c88bx\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.223520 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.224283 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.224604 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.222972 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-c88bx\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.228377 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-c88bx\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.228421 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0386fc07-a367-4188-8fab-3ce5d14ad6f2-trusted-ca-bundle\") pod \"apiserver-8596bd845d-4x4rk\" (UID: \"0386fc07-a367-4188-8fab-3ce5d14ad6f2\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-4x4rk" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.228461 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3b28944b-12d3-4087-b906-99fbf2937724-serving-cert\") pod \"openshift-config-operator-5777786469-s5mfg\" (UID: 
\"3b28944b-12d3-4087-b906-99fbf2937724\") " pod="openshift-config-operator/openshift-config-operator-5777786469-s5mfg" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.228521 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/73f78db9-bab5-49ee-84a4-9f0825efca8a-audit-dir\") pod \"oauth-openshift-66458b6674-c88bx\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.228574 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9aa837bd-63fc-4bb8-b158-d8632117a117-console-serving-cert\") pod \"console-64d44f6ddf-78z8z\" (UID: \"9aa837bd-63fc-4bb8-b158-d8632117a117\") " pod="openshift-console/console-64d44f6ddf-78z8z" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.228596 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9aa837bd-63fc-4bb8-b158-d8632117a117-oauth-serving-cert\") pod \"console-64d44f6ddf-78z8z\" (UID: \"9aa837bd-63fc-4bb8-b158-d8632117a117\") " pod="openshift-console/console-64d44f6ddf-78z8z" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.228634 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/72f63421-cfe9-45f8-85fe-b779a81a7ebb-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-xn6qp\" (UID: \"72f63421-cfe9-45f8-85fe-b779a81a7ebb\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xn6qp" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.228661 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksglj\" (UniqueName: 
\"kubernetes.io/projected/0386fc07-a367-4188-8fab-3ce5d14ad6f2-kube-api-access-ksglj\") pod \"apiserver-8596bd845d-4x4rk\" (UID: \"0386fc07-a367-4188-8fab-3ce5d14ad6f2\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-4x4rk" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.228693 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0386fc07-a367-4188-8fab-3ce5d14ad6f2-encryption-config\") pod \"apiserver-8596bd845d-4x4rk\" (UID: \"0386fc07-a367-4188-8fab-3ce5d14ad6f2\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-4x4rk" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.228719 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6f108d0-ed4b-4318-bd96-7de2824bf73e-config\") pod \"machine-api-operator-755bb95488-2vzsk\" (UID: \"c6f108d0-ed4b-4318-bd96-7de2824bf73e\") " pod="openshift-machine-api/machine-api-operator-755bb95488-2vzsk" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.228747 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0386fc07-a367-4188-8fab-3ce5d14ad6f2-audit-policies\") pod \"apiserver-8596bd845d-4x4rk\" (UID: \"0386fc07-a367-4188-8fab-3ce5d14ad6f2\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-4x4rk" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.228770 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-c88bx\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 
09:10:06.228791 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/72f63421-cfe9-45f8-85fe-b779a81a7ebb-image-import-ca\") pod \"apiserver-9ddfb9f55-xn6qp\" (UID: \"72f63421-cfe9-45f8-85fe-b779a81a7ebb\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xn6qp" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.228806 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0386fc07-a367-4188-8fab-3ce5d14ad6f2-etcd-serving-ca\") pod \"apiserver-8596bd845d-4x4rk\" (UID: \"0386fc07-a367-4188-8fab-3ce5d14ad6f2\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-4x4rk" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.228831 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-c88bx\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.228848 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/72f63421-cfe9-45f8-85fe-b779a81a7ebb-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-xn6qp\" (UID: \"72f63421-cfe9-45f8-85fe-b779a81a7ebb\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xn6qp" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.228864 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/603cfb78-063c-444d-8434-38e8ff6b5f70-serving-cert\") pod \"authentication-operator-7f5c659b84-pss2p\" (UID: 
\"603cfb78-063c-444d-8434-38e8ff6b5f70\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-pss2p" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.228887 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/72f63421-cfe9-45f8-85fe-b779a81a7ebb-audit\") pod \"apiserver-9ddfb9f55-xn6qp\" (UID: \"72f63421-cfe9-45f8-85fe-b779a81a7ebb\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xn6qp" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.228942 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9aa837bd-63fc-4bb8-b158-d8632117a117-trusted-ca-bundle\") pod \"console-64d44f6ddf-78z8z\" (UID: \"9aa837bd-63fc-4bb8-b158-d8632117a117\") " pod="openshift-console/console-64d44f6ddf-78z8z" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.228972 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-c88bx\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.228992 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9aa837bd-63fc-4bb8-b158-d8632117a117-service-ca\") pod \"console-64d44f6ddf-78z8z\" (UID: \"9aa837bd-63fc-4bb8-b158-d8632117a117\") " pod="openshift-console/console-64d44f6ddf-78z8z" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.230070 5115 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.230206 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.230243 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.233649 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-bxhkt"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.234535 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.238458 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7ntwm"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.243967 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-69gcn"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.244357 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.244547 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.245057 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-bxhkt" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.245439 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.247038 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.249264 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-lg8fb"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.249493 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7ntwm" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.249925 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.250321 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-69gcn" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.251839 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-h9rh5"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.252149 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-lg8fb" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.254500 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-m6krp"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.254634 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-h9rh5" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.257268 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-rjtnv"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.257476 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-m6krp" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.260106 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-jxpqr"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.260209 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-rjtnv" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.265097 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-xtwqk"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.265239 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jxpqr" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.268032 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-8622t"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.268170 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-xtwqk" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.270845 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-68cf44c8b8-n9hxc"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.270975 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-8622t" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.276360 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-2pl95"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.276825 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-n9hxc" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.279728 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-lcng5"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.280006 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-2pl95" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.285403 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-m6g4t"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.291104 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-2vzsk"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.291137 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-ztcgs"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.291617 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-lcng5" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.292006 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-m6g4t" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.292385 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.296753 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-mfd49"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.296958 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-ztcgs" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.299768 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-95nt8"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.299927 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-mfd49" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.302684 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29481660-hh6m6"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.302757 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-95nt8" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.307657 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-9gfdh"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.307838 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481660-hh6m6" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.310716 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-smr5d"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.310823 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.311086 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-9gfdh" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.313617 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-9hn8c"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.313756 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-smr5d" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.316557 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-gc77j"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.316624 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-9hn8c" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.319446 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-4x4rk"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.319469 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-74545575db-l96rs"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.319626 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-gc77j" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.321934 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-s5mfg"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.321956 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-pss2p"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.321968 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-mg52n"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.322093 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-l96rs" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.324587 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-ft42n"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.324726 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-mg52n" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.327289 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-glkw9"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.327312 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-xn6qp"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.327323 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-s85qm"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.327335 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-777zr"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.327346 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-5rdz6"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.327357 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-m6krp"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.327369 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-rjtnv"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.327379 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-bxhkt"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.327394 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-c88bx"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.327405 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" 
pods=["openshift-dns-operator/dns-operator-799b87ffcd-8622t"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.327417 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-2pl95"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.327431 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-ztcgs"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.327442 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-ft42n"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.327452 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-lg8fb"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.327462 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-ljj2s"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.327473 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-5494g"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.327482 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-78z8z"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.327495 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-m6g4t"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.327418 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-ft42n" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.327505 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-69gcn"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.327516 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29481660-hh6m6"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.327527 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-h9rh5"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.327539 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-ttcl5"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.329439 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.329637 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd3b472c-53e1-402a-ad30-244ea317f0e1-config\") pod \"openshift-controller-manager-operator-686468bdd5-s85qm\" (UID: \"dd3b472c-53e1-402a-ad30-244ea317f0e1\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-s85qm" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.329676 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c6f108d0-ed4b-4318-bd96-7de2824bf73e-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-2vzsk\" (UID: \"c6f108d0-ed4b-4318-bd96-7de2824bf73e\") " pod="openshift-machine-api/machine-api-operator-755bb95488-2vzsk" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 
09:10:06.329707 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72f63421-cfe9-45f8-85fe-b779a81a7ebb-config\") pod \"apiserver-9ddfb9f55-xn6qp\" (UID: \"72f63421-cfe9-45f8-85fe-b779a81a7ebb\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xn6qp" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.329727 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/72f63421-cfe9-45f8-85fe-b779a81a7ebb-audit-dir\") pod \"apiserver-9ddfb9f55-xn6qp\" (UID: \"72f63421-cfe9-45f8-85fe-b779a81a7ebb\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xn6qp" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.329746 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-d5bxk\" (UniqueName: \"kubernetes.io/projected/603cfb78-063c-444d-8434-38e8ff6b5f70-kube-api-access-d5bxk\") pod \"authentication-operator-7f5c659b84-pss2p\" (UID: \"603cfb78-063c-444d-8434-38e8ff6b5f70\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-pss2p" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.329766 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/472e0bfa-47b1-4a6c-8fd5-3c5a0865c001-etcd-ca\") pod \"etcd-operator-69b85846b6-bxhkt\" (UID: \"472e0bfa-47b1-4a6c-8fd5-3c5a0865c001\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-bxhkt" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.329783 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/472e0bfa-47b1-4a6c-8fd5-3c5a0865c001-tmp-dir\") pod \"etcd-operator-69b85846b6-bxhkt\" (UID: \"472e0bfa-47b1-4a6c-8fd5-3c5a0865c001\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-bxhkt" Jan 20 
09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.329814 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-c88bx\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.329832 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c6f108d0-ed4b-4318-bd96-7de2824bf73e-images\") pod \"machine-api-operator-755bb95488-2vzsk\" (UID: \"c6f108d0-ed4b-4318-bd96-7de2824bf73e\") " pod="openshift-machine-api/machine-api-operator-755bb95488-2vzsk" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.329851 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09bd4dd7-1e94-43db-b3bc-fe2dd530d8ec-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-5494g\" (UID: \"09bd4dd7-1e94-43db-b3bc-fe2dd530d8ec\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-5494g" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.329869 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09bd4dd7-1e94-43db-b3bc-fe2dd530d8ec-config\") pod \"openshift-apiserver-operator-846cbfc458-5494g\" (UID: \"09bd4dd7-1e94-43db-b3bc-fe2dd530d8ec\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-5494g" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.329885 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/72f63421-cfe9-45f8-85fe-b779a81a7ebb-encryption-config\") pod \"apiserver-9ddfb9f55-xn6qp\" (UID: 
\"72f63421-cfe9-45f8-85fe-b779a81a7ebb\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xn6qp" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.329943 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26f7f00b-d69c-4a82-934c-025eb1500a33-serving-cert\") pod \"console-operator-67c89758df-glkw9\" (UID: \"26f7f00b-d69c-4a82-934c-025eb1500a33\") " pod="openshift-console-operator/console-operator-67c89758df-glkw9" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.329969 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsm7d\" (UniqueName: \"kubernetes.io/projected/d2be688d-0ad1-4b7a-9c55-0f2b7500cfdf-kube-api-access-fsm7d\") pod \"ingress-operator-6b9cb4dbcf-5rdz6\" (UID: \"d2be688d-0ad1-4b7a-9c55-0f2b7500cfdf\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-5rdz6" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.329990 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cnsw2\" (UniqueName: \"kubernetes.io/projected/09bd4dd7-1e94-43db-b3bc-fe2dd530d8ec-kube-api-access-cnsw2\") pod \"openshift-apiserver-operator-846cbfc458-5494g\" (UID: \"09bd4dd7-1e94-43db-b3bc-fe2dd530d8ec\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-5494g" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.330010 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-c88bx\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.330029 5115 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0386fc07-a367-4188-8fab-3ce5d14ad6f2-serving-cert\") pod \"apiserver-8596bd845d-4x4rk\" (UID: \"0386fc07-a367-4188-8fab-3ce5d14ad6f2\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-4x4rk" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.330085 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bh7f\" (UniqueName: \"kubernetes.io/projected/472e0bfa-47b1-4a6c-8fd5-3c5a0865c001-kube-api-access-4bh7f\") pod \"etcd-operator-69b85846b6-bxhkt\" (UID: \"472e0bfa-47b1-4a6c-8fd5-3c5a0865c001\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-bxhkt" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.330108 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2pzbj\" (UniqueName: \"kubernetes.io/projected/73f78db9-bab5-49ee-84a4-9f0825efca8a-kube-api-access-2pzbj\") pod \"oauth-openshift-66458b6674-c88bx\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.330127 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/72f63421-cfe9-45f8-85fe-b779a81a7ebb-serving-cert\") pod \"apiserver-9ddfb9f55-xn6qp\" (UID: \"72f63421-cfe9-45f8-85fe-b779a81a7ebb\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xn6qp" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.330146 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0386fc07-a367-4188-8fab-3ce5d14ad6f2-etcd-client\") pod \"apiserver-8596bd845d-4x4rk\" (UID: \"0386fc07-a367-4188-8fab-3ce5d14ad6f2\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-4x4rk" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 
09:10:06.330203 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0386fc07-a367-4188-8fab-3ce5d14ad6f2-audit-dir\") pod \"apiserver-8596bd845d-4x4rk\" (UID: \"0386fc07-a367-4188-8fab-3ce5d14ad6f2\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-4x4rk" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.330244 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9aa837bd-63fc-4bb8-b158-d8632117a117-console-config\") pod \"console-64d44f6ddf-78z8z\" (UID: \"9aa837bd-63fc-4bb8-b158-d8632117a117\") " pod="openshift-console/console-64d44f6ddf-78z8z" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.330268 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/603cfb78-063c-444d-8434-38e8ff6b5f70-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-pss2p\" (UID: \"603cfb78-063c-444d-8434-38e8ff6b5f70\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-pss2p" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.330297 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-c88bx\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.330316 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/72f63421-cfe9-45f8-85fe-b779a81a7ebb-node-pullsecrets\") pod \"apiserver-9ddfb9f55-xn6qp\" (UID: \"72f63421-cfe9-45f8-85fe-b779a81a7ebb\") " 
pod="openshift-apiserver/apiserver-9ddfb9f55-xn6qp" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.330341 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9aa837bd-63fc-4bb8-b158-d8632117a117-console-oauth-config\") pod \"console-64d44f6ddf-78z8z\" (UID: \"9aa837bd-63fc-4bb8-b158-d8632117a117\") " pod="openshift-console/console-64d44f6ddf-78z8z" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.330527 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72f63421-cfe9-45f8-85fe-b779a81a7ebb-config\") pod \"apiserver-9ddfb9f55-xn6qp\" (UID: \"72f63421-cfe9-45f8-85fe-b779a81a7ebb\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xn6qp" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.331374 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/72f63421-cfe9-45f8-85fe-b779a81a7ebb-audit-dir\") pod \"apiserver-9ddfb9f55-xn6qp\" (UID: \"72f63421-cfe9-45f8-85fe-b779a81a7ebb\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xn6qp" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.331485 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0386fc07-a367-4188-8fab-3ce5d14ad6f2-audit-dir\") pod \"apiserver-8596bd845d-4x4rk\" (UID: \"0386fc07-a367-4188-8fab-3ce5d14ad6f2\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-4x4rk" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.332574 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c6f108d0-ed4b-4318-bd96-7de2824bf73e-images\") pod \"machine-api-operator-755bb95488-2vzsk\" (UID: \"c6f108d0-ed4b-4318-bd96-7de2824bf73e\") " pod="openshift-machine-api/machine-api-operator-755bb95488-2vzsk" Jan 20 09:10:06 crc 
kubenswrapper[5115]: I0120 09:10:06.332686 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/603cfb78-063c-444d-8434-38e8ff6b5f70-config\") pod \"authentication-operator-7f5c659b84-pss2p\" (UID: \"603cfb78-063c-444d-8434-38e8ff6b5f70\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-pss2p" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.332755 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-c88bx\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.332788 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/603cfb78-063c-444d-8434-38e8ff6b5f70-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-pss2p\" (UID: \"603cfb78-063c-444d-8434-38e8ff6b5f70\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-pss2p" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.332825 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/472e0bfa-47b1-4a6c-8fd5-3c5a0865c001-serving-cert\") pod \"etcd-operator-69b85846b6-bxhkt\" (UID: \"472e0bfa-47b1-4a6c-8fd5-3c5a0865c001\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-bxhkt" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.332860 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86w69\" (UniqueName: \"kubernetes.io/projected/dd3b472c-53e1-402a-ad30-244ea317f0e1-kube-api-access-86w69\") 
pod \"openshift-controller-manager-operator-686468bdd5-s85qm\" (UID: \"dd3b472c-53e1-402a-ad30-244ea317f0e1\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-s85qm" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.332926 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-g768j\" (UniqueName: \"kubernetes.io/projected/3b28944b-12d3-4087-b906-99fbf2937724-kube-api-access-g768j\") pod \"openshift-config-operator-5777786469-s5mfg\" (UID: \"3b28944b-12d3-4087-b906-99fbf2937724\") " pod="openshift-config-operator/openshift-config-operator-5777786469-s5mfg" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.332958 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rrc9m\" (UniqueName: \"kubernetes.io/projected/c6f108d0-ed4b-4318-bd96-7de2824bf73e-kube-api-access-rrc9m\") pod \"machine-api-operator-755bb95488-2vzsk\" (UID: \"c6f108d0-ed4b-4318-bd96-7de2824bf73e\") " pod="openshift-machine-api/machine-api-operator-755bb95488-2vzsk" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.332982 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-k4mq5\" (UniqueName: \"kubernetes.io/projected/9aa837bd-63fc-4bb8-b158-d8632117a117-kube-api-access-k4mq5\") pod \"console-64d44f6ddf-78z8z\" (UID: \"9aa837bd-63fc-4bb8-b158-d8632117a117\") " pod="openshift-console/console-64d44f6ddf-78z8z" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.332979 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-c88bx\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 
09:10:06.333005 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/3b28944b-12d3-4087-b906-99fbf2937724-available-featuregates\") pod \"openshift-config-operator-5777786469-s5mfg\" (UID: \"3b28944b-12d3-4087-b906-99fbf2937724\") " pod="openshift-config-operator/openshift-config-operator-5777786469-s5mfg" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.333031 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/45b3a05c-a4a6-4e67-9c8f-c914c93cb801-auth-proxy-config\") pod \"machine-approver-54c688565-6lm7w\" (UID: \"45b3a05c-a4a6-4e67-9c8f-c914c93cb801\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-6lm7w" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.333069 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-c88bx\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.333097 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26f7f00b-d69c-4a82-934c-025eb1500a33-config\") pod \"console-operator-67c89758df-glkw9\" (UID: \"26f7f00b-d69c-4a82-934c-025eb1500a33\") " pod="openshift-console-operator/console-operator-67c89758df-glkw9" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.333117 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d2be688d-0ad1-4b7a-9c55-0f2b7500cfdf-metrics-tls\") pod 
\"ingress-operator-6b9cb4dbcf-5rdz6\" (UID: \"d2be688d-0ad1-4b7a-9c55-0f2b7500cfdf\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-5rdz6" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.333146 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-c88bx\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.333165 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd3b472c-53e1-402a-ad30-244ea317f0e1-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-s85qm\" (UID: \"dd3b472c-53e1-402a-ad30-244ea317f0e1\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-s85qm" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.333197 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/472e0bfa-47b1-4a6c-8fd5-3c5a0865c001-config\") pod \"etcd-operator-69b85846b6-bxhkt\" (UID: \"472e0bfa-47b1-4a6c-8fd5-3c5a0865c001\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-bxhkt" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.333224 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-c88bx\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 
09:10:06.333237 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-c88bx\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.333248 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0386fc07-a367-4188-8fab-3ce5d14ad6f2-trusted-ca-bundle\") pod \"apiserver-8596bd845d-4x4rk\" (UID: \"0386fc07-a367-4188-8fab-3ce5d14ad6f2\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-4x4rk" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.333313 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/dd3b472c-53e1-402a-ad30-244ea317f0e1-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-s85qm\" (UID: \"dd3b472c-53e1-402a-ad30-244ea317f0e1\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-s85qm" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.333338 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/676675d9-dafb-4b30-ad88-bea33cf42ce0-config\") pod \"openshift-kube-scheduler-operator-54f497555d-777zr\" (UID: \"676675d9-dafb-4b30-ad88-bea33cf42ce0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-777zr" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.333383 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3b28944b-12d3-4087-b906-99fbf2937724-serving-cert\") pod 
\"openshift-config-operator-5777786469-s5mfg\" (UID: \"3b28944b-12d3-4087-b906-99fbf2937724\") " pod="openshift-config-operator/openshift-config-operator-5777786469-s5mfg" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.333405 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/26f7f00b-d69c-4a82-934c-025eb1500a33-trusted-ca\") pod \"console-operator-67c89758df-glkw9\" (UID: \"26f7f00b-d69c-4a82-934c-025eb1500a33\") " pod="openshift-console-operator/console-operator-67c89758df-glkw9" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.333421 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/676675d9-dafb-4b30-ad88-bea33cf42ce0-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-777zr\" (UID: \"676675d9-dafb-4b30-ad88-bea33cf42ce0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-777zr" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.333464 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/73f78db9-bab5-49ee-84a4-9f0825efca8a-audit-dir\") pod \"oauth-openshift-66458b6674-c88bx\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.333484 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9aa837bd-63fc-4bb8-b158-d8632117a117-console-config\") pod \"console-64d44f6ddf-78z8z\" (UID: \"9aa837bd-63fc-4bb8-b158-d8632117a117\") " pod="openshift-console/console-64d44f6ddf-78z8z" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.333502 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9aa837bd-63fc-4bb8-b158-d8632117a117-console-serving-cert\") pod \"console-64d44f6ddf-78z8z\" (UID: \"9aa837bd-63fc-4bb8-b158-d8632117a117\") " pod="openshift-console/console-64d44f6ddf-78z8z" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.333521 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9aa837bd-63fc-4bb8-b158-d8632117a117-oauth-serving-cert\") pod \"console-64d44f6ddf-78z8z\" (UID: \"9aa837bd-63fc-4bb8-b158-d8632117a117\") " pod="openshift-console/console-64d44f6ddf-78z8z" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.333559 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gz78h\" (UniqueName: \"kubernetes.io/projected/10472dc9-9bed-4d08-811a-76a55f0d6cf4-kube-api-access-gz78h\") pod \"machine-config-controller-f9cdd68f7-7ntwm\" (UID: \"10472dc9-9bed-4d08-811a-76a55f0d6cf4\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7ntwm" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.333616 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/72f63421-cfe9-45f8-85fe-b779a81a7ebb-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-xn6qp\" (UID: \"72f63421-cfe9-45f8-85fe-b779a81a7ebb\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xn6qp" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.333633 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-59xcc"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.333648 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ksglj\" (UniqueName: \"kubernetes.io/projected/0386fc07-a367-4188-8fab-3ce5d14ad6f2-kube-api-access-ksglj\") pod \"apiserver-8596bd845d-4x4rk\" (UID: 
\"0386fc07-a367-4188-8fab-3ce5d14ad6f2\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-4x4rk" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.333678 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/10472dc9-9bed-4d08-811a-76a55f0d6cf4-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-7ntwm\" (UID: \"10472dc9-9bed-4d08-811a-76a55f0d6cf4\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7ntwm" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.333713 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0386fc07-a367-4188-8fab-3ce5d14ad6f2-encryption-config\") pod \"apiserver-8596bd845d-4x4rk\" (UID: \"0386fc07-a367-4188-8fab-3ce5d14ad6f2\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-4x4rk" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.333741 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/10472dc9-9bed-4d08-811a-76a55f0d6cf4-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-7ntwm\" (UID: \"10472dc9-9bed-4d08-811a-76a55f0d6cf4\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7ntwm" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.333766 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/45b3a05c-a4a6-4e67-9c8f-c914c93cb801-machine-approver-tls\") pod \"machine-approver-54c688565-6lm7w\" (UID: \"45b3a05c-a4a6-4e67-9c8f-c914c93cb801\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-6lm7w" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.333778 5115 operation_generator.go:615] "MountVolume.SetUp succeeded 
for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0386fc07-a367-4188-8fab-3ce5d14ad6f2-trusted-ca-bundle\") pod \"apiserver-8596bd845d-4x4rk\" (UID: \"0386fc07-a367-4188-8fab-3ce5d14ad6f2\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-4x4rk" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.333791 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/676675d9-dafb-4b30-ad88-bea33cf42ce0-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-777zr\" (UID: \"676675d9-dafb-4b30-ad88-bea33cf42ce0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-777zr" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.333825 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmb7q\" (UniqueName: \"kubernetes.io/projected/b9ac66ad-91ae-4ffd-b159-a7549ca71803-kube-api-access-zmb7q\") pod \"downloads-747b44746d-ljj2s\" (UID: \"b9ac66ad-91ae-4ffd-b159-a7549ca71803\") " pod="openshift-console/downloads-747b44746d-ljj2s" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.333854 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/472e0bfa-47b1-4a6c-8fd5-3c5a0865c001-etcd-service-ca\") pod \"etcd-operator-69b85846b6-bxhkt\" (UID: \"472e0bfa-47b1-4a6c-8fd5-3c5a0865c001\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-bxhkt" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.333912 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6f108d0-ed4b-4318-bd96-7de2824bf73e-config\") pod \"machine-api-operator-755bb95488-2vzsk\" (UID: \"c6f108d0-ed4b-4318-bd96-7de2824bf73e\") " 
pod="openshift-machine-api/machine-api-operator-755bb95488-2vzsk" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.333938 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0386fc07-a367-4188-8fab-3ce5d14ad6f2-audit-policies\") pod \"apiserver-8596bd845d-4x4rk\" (UID: \"0386fc07-a367-4188-8fab-3ce5d14ad6f2\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-4x4rk" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.333966 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/676675d9-dafb-4b30-ad88-bea33cf42ce0-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-777zr\" (UID: \"676675d9-dafb-4b30-ad88-bea33cf42ce0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-777zr" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.333998 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-c88bx\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.334025 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/72f63421-cfe9-45f8-85fe-b779a81a7ebb-image-import-ca\") pod \"apiserver-9ddfb9f55-xn6qp\" (UID: \"72f63421-cfe9-45f8-85fe-b779a81a7ebb\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xn6qp" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.334048 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/0386fc07-a367-4188-8fab-3ce5d14ad6f2-etcd-serving-ca\") pod \"apiserver-8596bd845d-4x4rk\" (UID: \"0386fc07-a367-4188-8fab-3ce5d14ad6f2\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-4x4rk" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.334081 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-c88bx\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.334104 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/72f63421-cfe9-45f8-85fe-b779a81a7ebb-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-xn6qp\" (UID: \"72f63421-cfe9-45f8-85fe-b779a81a7ebb\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xn6qp" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.334126 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/603cfb78-063c-444d-8434-38e8ff6b5f70-serving-cert\") pod \"authentication-operator-7f5c659b84-pss2p\" (UID: \"603cfb78-063c-444d-8434-38e8ff6b5f70\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-pss2p" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.334149 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d2be688d-0ad1-4b7a-9c55-0f2b7500cfdf-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-5rdz6\" (UID: \"d2be688d-0ad1-4b7a-9c55-0f2b7500cfdf\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-5rdz6" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.334175 5115 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/472e0bfa-47b1-4a6c-8fd5-3c5a0865c001-etcd-client\") pod \"etcd-operator-69b85846b6-bxhkt\" (UID: \"472e0bfa-47b1-4a6c-8fd5-3c5a0865c001\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-bxhkt" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.334195 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9aa837bd-63fc-4bb8-b158-d8632117a117-oauth-serving-cert\") pod \"console-64d44f6ddf-78z8z\" (UID: \"9aa837bd-63fc-4bb8-b158-d8632117a117\") " pod="openshift-console/console-64d44f6ddf-78z8z" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.334221 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/72f63421-cfe9-45f8-85fe-b779a81a7ebb-audit\") pod \"apiserver-9ddfb9f55-xn6qp\" (UID: \"72f63421-cfe9-45f8-85fe-b779a81a7ebb\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xn6qp" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.334254 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/73f78db9-bab5-49ee-84a4-9f0825efca8a-audit-dir\") pod \"oauth-openshift-66458b6674-c88bx\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.334264 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9aa837bd-63fc-4bb8-b158-d8632117a117-trusted-ca-bundle\") pod \"console-64d44f6ddf-78z8z\" (UID: \"9aa837bd-63fc-4bb8-b158-d8632117a117\") " pod="openshift-console/console-64d44f6ddf-78z8z" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.334280 5115 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/603cfb78-063c-444d-8434-38e8ff6b5f70-config\") pod \"authentication-operator-7f5c659b84-pss2p\" (UID: \"603cfb78-063c-444d-8434-38e8ff6b5f70\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-pss2p" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.334293 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d2be688d-0ad1-4b7a-9c55-0f2b7500cfdf-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-5rdz6\" (UID: \"d2be688d-0ad1-4b7a-9c55-0f2b7500cfdf\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-5rdz6" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.334332 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sc4b7\" (UniqueName: \"kubernetes.io/projected/45b3a05c-a4a6-4e67-9c8f-c914c93cb801-kube-api-access-sc4b7\") pod \"machine-approver-54c688565-6lm7w\" (UID: \"45b3a05c-a4a6-4e67-9c8f-c914c93cb801\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-6lm7w" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.334365 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-c88bx\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.334393 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9aa837bd-63fc-4bb8-b158-d8632117a117-service-ca\") pod \"console-64d44f6ddf-78z8z\" (UID: 
\"9aa837bd-63fc-4bb8-b158-d8632117a117\") " pod="openshift-console/console-64d44f6ddf-78z8z" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.334418 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45b3a05c-a4a6-4e67-9c8f-c914c93cb801-config\") pod \"machine-approver-54c688565-6lm7w\" (UID: \"45b3a05c-a4a6-4e67-9c8f-c914c93cb801\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-6lm7w" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.334456 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/73f78db9-bab5-49ee-84a4-9f0825efca8a-audit-policies\") pod \"oauth-openshift-66458b6674-c88bx\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.334479 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/72f63421-cfe9-45f8-85fe-b779a81a7ebb-etcd-client\") pod \"apiserver-9ddfb9f55-xn6qp\" (UID: \"72f63421-cfe9-45f8-85fe-b779a81a7ebb\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xn6qp" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.334504 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qph7v\" (UniqueName: \"kubernetes.io/projected/72f63421-cfe9-45f8-85fe-b779a81a7ebb-kube-api-access-qph7v\") pod \"apiserver-9ddfb9f55-xn6qp\" (UID: \"72f63421-cfe9-45f8-85fe-b779a81a7ebb\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xn6qp" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.334590 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-c88bx\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.334621 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xckx\" (UniqueName: \"kubernetes.io/projected/26f7f00b-d69c-4a82-934c-025eb1500a33-kube-api-access-8xckx\") pod \"console-operator-67c89758df-glkw9\" (UID: \"26f7f00b-d69c-4a82-934c-025eb1500a33\") " pod="openshift-console-operator/console-operator-67c89758df-glkw9" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.337061 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/72f63421-cfe9-45f8-85fe-b779a81a7ebb-serving-cert\") pod \"apiserver-9ddfb9f55-xn6qp\" (UID: \"72f63421-cfe9-45f8-85fe-b779a81a7ebb\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xn6qp" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.337187 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c6f108d0-ed4b-4318-bd96-7de2824bf73e-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-2vzsk\" (UID: \"c6f108d0-ed4b-4318-bd96-7de2824bf73e\") " pod="openshift-machine-api/machine-api-operator-755bb95488-2vzsk" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.337383 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0386fc07-a367-4188-8fab-3ce5d14ad6f2-serving-cert\") pod \"apiserver-8596bd845d-4x4rk\" (UID: \"0386fc07-a367-4188-8fab-3ce5d14ad6f2\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-4x4rk" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.333070 5115 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/603cfb78-063c-444d-8434-38e8ff6b5f70-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-pss2p\" (UID: \"603cfb78-063c-444d-8434-38e8ff6b5f70\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-pss2p" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.337965 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09bd4dd7-1e94-43db-b3bc-fe2dd530d8ec-config\") pod \"openshift-apiserver-operator-846cbfc458-5494g\" (UID: \"09bd4dd7-1e94-43db-b3bc-fe2dd530d8ec\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-5494g" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.337990 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/72f63421-cfe9-45f8-85fe-b779a81a7ebb-node-pullsecrets\") pod \"apiserver-9ddfb9f55-xn6qp\" (UID: \"72f63421-cfe9-45f8-85fe-b779a81a7ebb\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xn6qp" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.338032 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0386fc07-a367-4188-8fab-3ce5d14ad6f2-etcd-serving-ca\") pod \"apiserver-8596bd845d-4x4rk\" (UID: \"0386fc07-a367-4188-8fab-3ce5d14ad6f2\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-4x4rk" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.338746 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/73f78db9-bab5-49ee-84a4-9f0825efca8a-audit-policies\") pod \"oauth-openshift-66458b6674-c88bx\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 
09:10:06.338922 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9aa837bd-63fc-4bb8-b158-d8632117a117-service-ca\") pod \"console-64d44f6ddf-78z8z\" (UID: \"9aa837bd-63fc-4bb8-b158-d8632117a117\") " pod="openshift-console/console-64d44f6ddf-78z8z" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.339122 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/603cfb78-063c-444d-8434-38e8ff6b5f70-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-pss2p\" (UID: \"603cfb78-063c-444d-8434-38e8ff6b5f70\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-pss2p" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.339219 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/72f63421-cfe9-45f8-85fe-b779a81a7ebb-image-import-ca\") pod \"apiserver-9ddfb9f55-xn6qp\" (UID: \"72f63421-cfe9-45f8-85fe-b779a81a7ebb\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xn6qp" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.339507 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09bd4dd7-1e94-43db-b3bc-fe2dd530d8ec-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-5494g\" (UID: \"09bd4dd7-1e94-43db-b3bc-fe2dd530d8ec\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-5494g" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.339659 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9aa837bd-63fc-4bb8-b158-d8632117a117-console-serving-cert\") pod \"console-64d44f6ddf-78z8z\" (UID: \"9aa837bd-63fc-4bb8-b158-d8632117a117\") " pod="openshift-console/console-64d44f6ddf-78z8z" Jan 20 09:10:06 crc 
kubenswrapper[5115]: I0120 09:10:06.339993 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0386fc07-a367-4188-8fab-3ce5d14ad6f2-audit-policies\") pod \"apiserver-8596bd845d-4x4rk\" (UID: \"0386fc07-a367-4188-8fab-3ce5d14ad6f2\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-4x4rk" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.340004 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/72f63421-cfe9-45f8-85fe-b779a81a7ebb-encryption-config\") pod \"apiserver-9ddfb9f55-xn6qp\" (UID: \"72f63421-cfe9-45f8-85fe-b779a81a7ebb\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xn6qp" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.340113 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3b28944b-12d3-4087-b906-99fbf2937724-serving-cert\") pod \"openshift-config-operator-5777786469-s5mfg\" (UID: \"3b28944b-12d3-4087-b906-99fbf2937724\") " pod="openshift-config-operator/openshift-config-operator-5777786469-s5mfg" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.340298 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0386fc07-a367-4188-8fab-3ce5d14ad6f2-etcd-client\") pod \"apiserver-8596bd845d-4x4rk\" (UID: \"0386fc07-a367-4188-8fab-3ce5d14ad6f2\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-4x4rk" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.340484 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-c88bx\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" 
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.340498 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/3b28944b-12d3-4087-b906-99fbf2937724-available-featuregates\") pod \"openshift-config-operator-5777786469-s5mfg\" (UID: \"3b28944b-12d3-4087-b906-99fbf2937724\") " pod="openshift-config-operator/openshift-config-operator-5777786469-s5mfg" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.340503 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/72f63421-cfe9-45f8-85fe-b779a81a7ebb-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-xn6qp\" (UID: \"72f63421-cfe9-45f8-85fe-b779a81a7ebb\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xn6qp" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.341677 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/72f63421-cfe9-45f8-85fe-b779a81a7ebb-audit\") pod \"apiserver-9ddfb9f55-xn6qp\" (UID: \"72f63421-cfe9-45f8-85fe-b779a81a7ebb\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xn6qp" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.341840 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-c88bx\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.342100 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6f108d0-ed4b-4318-bd96-7de2824bf73e-config\") pod \"machine-api-operator-755bb95488-2vzsk\" (UID: \"c6f108d0-ed4b-4318-bd96-7de2824bf73e\") " 
pod="openshift-machine-api/machine-api-operator-755bb95488-2vzsk" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.342276 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/72f63421-cfe9-45f8-85fe-b779a81a7ebb-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-xn6qp\" (UID: \"72f63421-cfe9-45f8-85fe-b779a81a7ebb\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xn6qp" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.342392 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7ntwm"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.342416 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-xtwqk"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.342428 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-jxpqr"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.342440 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-lcng5"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.342451 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-gc77j"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.342463 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-9hn8c"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.342474 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-mfd49"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.342484 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" 
pods=["hostpath-provisioner/csi-hostpathplugin-ttcl5"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.342496 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-l96rs"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.342506 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-59xcc"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.342514 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-95nt8"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.342525 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-smr5d"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.342535 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-b674j"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.342545 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-9gfdh"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.342557 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-pkz7s"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.342668 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9aa837bd-63fc-4bb8-b158-d8632117a117-console-oauth-config\") pod \"console-64d44f6ddf-78z8z\" (UID: \"9aa837bd-63fc-4bb8-b158-d8632117a117\") " pod="openshift-console/console-64d44f6ddf-78z8z" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.342790 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-ttcl5" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.343162 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-59xcc" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.344740 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0386fc07-a367-4188-8fab-3ce5d14ad6f2-encryption-config\") pod \"apiserver-8596bd845d-4x4rk\" (UID: \"0386fc07-a367-4188-8fab-3ce5d14ad6f2\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-4x4rk" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.344884 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-c88bx\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.345595 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-c88bx\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.346116 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-c88bx\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" Jan 20 09:10:06 crc 
kubenswrapper[5115]: I0120 09:10:06.347669 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/603cfb78-063c-444d-8434-38e8ff6b5f70-serving-cert\") pod \"authentication-operator-7f5c659b84-pss2p\" (UID: \"603cfb78-063c-444d-8434-38e8ff6b5f70\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-pss2p" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.349868 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/72f63421-cfe9-45f8-85fe-b779a81a7ebb-etcd-client\") pod \"apiserver-9ddfb9f55-xn6qp\" (UID: \"72f63421-cfe9-45f8-85fe-b779a81a7ebb\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xn6qp" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.350624 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9aa837bd-63fc-4bb8-b158-d8632117a117-trusted-ca-bundle\") pod \"console-64d44f6ddf-78z8z\" (UID: \"9aa837bd-63fc-4bb8-b158-d8632117a117\") " pod="openshift-console/console-64d44f6ddf-78z8z" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.350796 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.350963 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-pkz7s" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.351182 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-c88bx\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.351835 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-c88bx\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.354051 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-c88bx\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.356762 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-c88bx\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.372568 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.389120 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.409539 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.430120 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.435352 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/45b3a05c-a4a6-4e67-9c8f-c914c93cb801-auth-proxy-config\") pod \"machine-approver-54c688565-6lm7w\" (UID: \"45b3a05c-a4a6-4e67-9c8f-c914c93cb801\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-6lm7w" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.435466 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26f7f00b-d69c-4a82-934c-025eb1500a33-config\") pod \"console-operator-67c89758df-glkw9\" (UID: \"26f7f00b-d69c-4a82-934c-025eb1500a33\") " pod="openshift-console-operator/console-operator-67c89758df-glkw9" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.435581 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d2be688d-0ad1-4b7a-9c55-0f2b7500cfdf-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-5rdz6\" (UID: \"d2be688d-0ad1-4b7a-9c55-0f2b7500cfdf\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-5rdz6" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.436180 5115 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd3b472c-53e1-402a-ad30-244ea317f0e1-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-s85qm\" (UID: \"dd3b472c-53e1-402a-ad30-244ea317f0e1\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-s85qm" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.436278 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/472e0bfa-47b1-4a6c-8fd5-3c5a0865c001-config\") pod \"etcd-operator-69b85846b6-bxhkt\" (UID: \"472e0bfa-47b1-4a6c-8fd5-3c5a0865c001\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-bxhkt" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.436457 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/dd3b472c-53e1-402a-ad30-244ea317f0e1-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-s85qm\" (UID: \"dd3b472c-53e1-402a-ad30-244ea317f0e1\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-s85qm" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.436627 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/676675d9-dafb-4b30-ad88-bea33cf42ce0-config\") pod \"openshift-kube-scheduler-operator-54f497555d-777zr\" (UID: \"676675d9-dafb-4b30-ad88-bea33cf42ce0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-777zr" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.436705 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/26f7f00b-d69c-4a82-934c-025eb1500a33-trusted-ca\") pod \"console-operator-67c89758df-glkw9\" (UID: \"26f7f00b-d69c-4a82-934c-025eb1500a33\") " 
pod="openshift-console-operator/console-operator-67c89758df-glkw9" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.436785 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/676675d9-dafb-4b30-ad88-bea33cf42ce0-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-777zr\" (UID: \"676675d9-dafb-4b30-ad88-bea33cf42ce0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-777zr" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.436913 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gz78h\" (UniqueName: \"kubernetes.io/projected/10472dc9-9bed-4d08-811a-76a55f0d6cf4-kube-api-access-gz78h\") pod \"machine-config-controller-f9cdd68f7-7ntwm\" (UID: \"10472dc9-9bed-4d08-811a-76a55f0d6cf4\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7ntwm" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.437006 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/10472dc9-9bed-4d08-811a-76a55f0d6cf4-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-7ntwm\" (UID: \"10472dc9-9bed-4d08-811a-76a55f0d6cf4\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7ntwm" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.437083 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/10472dc9-9bed-4d08-811a-76a55f0d6cf4-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-7ntwm\" (UID: \"10472dc9-9bed-4d08-811a-76a55f0d6cf4\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7ntwm" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.437152 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/45b3a05c-a4a6-4e67-9c8f-c914c93cb801-machine-approver-tls\") pod \"machine-approver-54c688565-6lm7w\" (UID: \"45b3a05c-a4a6-4e67-9c8f-c914c93cb801\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-6lm7w" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.437216 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/676675d9-dafb-4b30-ad88-bea33cf42ce0-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-777zr\" (UID: \"676675d9-dafb-4b30-ad88-bea33cf42ce0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-777zr" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.437288 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zmb7q\" (UniqueName: \"kubernetes.io/projected/b9ac66ad-91ae-4ffd-b159-a7549ca71803-kube-api-access-zmb7q\") pod \"downloads-747b44746d-ljj2s\" (UID: \"b9ac66ad-91ae-4ffd-b159-a7549ca71803\") " pod="openshift-console/downloads-747b44746d-ljj2s" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.437358 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/472e0bfa-47b1-4a6c-8fd5-3c5a0865c001-etcd-service-ca\") pod \"etcd-operator-69b85846b6-bxhkt\" (UID: \"472e0bfa-47b1-4a6c-8fd5-3c5a0865c001\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-bxhkt" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.437430 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/676675d9-dafb-4b30-ad88-bea33cf42ce0-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-777zr\" (UID: \"676675d9-dafb-4b30-ad88-bea33cf42ce0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-777zr" 
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.437517 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d2be688d-0ad1-4b7a-9c55-0f2b7500cfdf-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-5rdz6\" (UID: \"d2be688d-0ad1-4b7a-9c55-0f2b7500cfdf\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-5rdz6" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.437583 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/472e0bfa-47b1-4a6c-8fd5-3c5a0865c001-etcd-client\") pod \"etcd-operator-69b85846b6-bxhkt\" (UID: \"472e0bfa-47b1-4a6c-8fd5-3c5a0865c001\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-bxhkt" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.437663 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d2be688d-0ad1-4b7a-9c55-0f2b7500cfdf-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-5rdz6\" (UID: \"d2be688d-0ad1-4b7a-9c55-0f2b7500cfdf\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-5rdz6" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.437717 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/472e0bfa-47b1-4a6c-8fd5-3c5a0865c001-config\") pod \"etcd-operator-69b85846b6-bxhkt\" (UID: \"472e0bfa-47b1-4a6c-8fd5-3c5a0865c001\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-bxhkt" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.436287 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/45b3a05c-a4a6-4e67-9c8f-c914c93cb801-auth-proxy-config\") pod \"machine-approver-54c688565-6lm7w\" (UID: \"45b3a05c-a4a6-4e67-9c8f-c914c93cb801\") " 
pod="openshift-cluster-machine-approver/machine-approver-54c688565-6lm7w" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.437737 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-sc4b7\" (UniqueName: \"kubernetes.io/projected/45b3a05c-a4a6-4e67-9c8f-c914c93cb801-kube-api-access-sc4b7\") pod \"machine-approver-54c688565-6lm7w\" (UID: \"45b3a05c-a4a6-4e67-9c8f-c914c93cb801\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-6lm7w" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.437920 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45b3a05c-a4a6-4e67-9c8f-c914c93cb801-config\") pod \"machine-approver-54c688565-6lm7w\" (UID: \"45b3a05c-a4a6-4e67-9c8f-c914c93cb801\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-6lm7w" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.438044 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8xckx\" (UniqueName: \"kubernetes.io/projected/26f7f00b-d69c-4a82-934c-025eb1500a33-kube-api-access-8xckx\") pod \"console-operator-67c89758df-glkw9\" (UID: \"26f7f00b-d69c-4a82-934c-025eb1500a33\") " pod="openshift-console-operator/console-operator-67c89758df-glkw9" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.438139 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd3b472c-53e1-402a-ad30-244ea317f0e1-config\") pod \"openshift-controller-manager-operator-686468bdd5-s85qm\" (UID: \"dd3b472c-53e1-402a-ad30-244ea317f0e1\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-s85qm" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.439975 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: 
\"kubernetes.io/configmap/472e0bfa-47b1-4a6c-8fd5-3c5a0865c001-etcd-ca\") pod \"etcd-operator-69b85846b6-bxhkt\" (UID: \"472e0bfa-47b1-4a6c-8fd5-3c5a0865c001\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-bxhkt" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.440025 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/472e0bfa-47b1-4a6c-8fd5-3c5a0865c001-tmp-dir\") pod \"etcd-operator-69b85846b6-bxhkt\" (UID: \"472e0bfa-47b1-4a6c-8fd5-3c5a0865c001\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-bxhkt" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.440126 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26f7f00b-d69c-4a82-934c-025eb1500a33-serving-cert\") pod \"console-operator-67c89758df-glkw9\" (UID: \"26f7f00b-d69c-4a82-934c-025eb1500a33\") " pod="openshift-console-operator/console-operator-67c89758df-glkw9" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.440167 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fsm7d\" (UniqueName: \"kubernetes.io/projected/d2be688d-0ad1-4b7a-9c55-0f2b7500cfdf-kube-api-access-fsm7d\") pod \"ingress-operator-6b9cb4dbcf-5rdz6\" (UID: \"d2be688d-0ad1-4b7a-9c55-0f2b7500cfdf\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-5rdz6" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.440214 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4bh7f\" (UniqueName: \"kubernetes.io/projected/472e0bfa-47b1-4a6c-8fd5-3c5a0865c001-kube-api-access-4bh7f\") pod \"etcd-operator-69b85846b6-bxhkt\" (UID: \"472e0bfa-47b1-4a6c-8fd5-3c5a0865c001\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-bxhkt" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.440288 5115 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/472e0bfa-47b1-4a6c-8fd5-3c5a0865c001-serving-cert\") pod \"etcd-operator-69b85846b6-bxhkt\" (UID: \"472e0bfa-47b1-4a6c-8fd5-3c5a0865c001\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-bxhkt"
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.440326 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-86w69\" (UniqueName: \"kubernetes.io/projected/dd3b472c-53e1-402a-ad30-244ea317f0e1-kube-api-access-86w69\") pod \"openshift-controller-manager-operator-686468bdd5-s85qm\" (UID: \"dd3b472c-53e1-402a-ad30-244ea317f0e1\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-s85qm"
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.439286 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/dd3b472c-53e1-402a-ad30-244ea317f0e1-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-s85qm\" (UID: \"dd3b472c-53e1-402a-ad30-244ea317f0e1\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-s85qm"
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.441159 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/472e0bfa-47b1-4a6c-8fd5-3c5a0865c001-etcd-ca\") pod \"etcd-operator-69b85846b6-bxhkt\" (UID: \"472e0bfa-47b1-4a6c-8fd5-3c5a0865c001\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-bxhkt"
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.439709 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/676675d9-dafb-4b30-ad88-bea33cf42ce0-config\") pod \"openshift-kube-scheduler-operator-54f497555d-777zr\" (UID: \"676675d9-dafb-4b30-ad88-bea33cf42ce0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-777zr"
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.441361 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd3b472c-53e1-402a-ad30-244ea317f0e1-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-s85qm\" (UID: \"dd3b472c-53e1-402a-ad30-244ea317f0e1\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-s85qm"
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.437023 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26f7f00b-d69c-4a82-934c-025eb1500a33-config\") pod \"console-operator-67c89758df-glkw9\" (UID: \"26f7f00b-d69c-4a82-934c-025eb1500a33\") " pod="openshift-console-operator/console-operator-67c89758df-glkw9"
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.441502 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/472e0bfa-47b1-4a6c-8fd5-3c5a0865c001-tmp-dir\") pod \"etcd-operator-69b85846b6-bxhkt\" (UID: \"472e0bfa-47b1-4a6c-8fd5-3c5a0865c001\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-bxhkt"
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.441825 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/26f7f00b-d69c-4a82-934c-025eb1500a33-trusted-ca\") pod \"console-operator-67c89758df-glkw9\" (UID: \"26f7f00b-d69c-4a82-934c-025eb1500a33\") " pod="openshift-console-operator/console-operator-67c89758df-glkw9"
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.441836 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/676675d9-dafb-4b30-ad88-bea33cf42ce0-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-777zr\" (UID: \"676675d9-dafb-4b30-ad88-bea33cf42ce0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-777zr"
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.442195 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/472e0bfa-47b1-4a6c-8fd5-3c5a0865c001-etcd-service-ca\") pod \"etcd-operator-69b85846b6-bxhkt\" (UID: \"472e0bfa-47b1-4a6c-8fd5-3c5a0865c001\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-bxhkt"
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.443028 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d2be688d-0ad1-4b7a-9c55-0f2b7500cfdf-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-5rdz6\" (UID: \"d2be688d-0ad1-4b7a-9c55-0f2b7500cfdf\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-5rdz6"
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.443661 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/10472dc9-9bed-4d08-811a-76a55f0d6cf4-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-7ntwm\" (UID: \"10472dc9-9bed-4d08-811a-76a55f0d6cf4\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7ntwm"
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.444510 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d2be688d-0ad1-4b7a-9c55-0f2b7500cfdf-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-5rdz6\" (UID: \"d2be688d-0ad1-4b7a-9c55-0f2b7500cfdf\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-5rdz6"
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.444518 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45b3a05c-a4a6-4e67-9c8f-c914c93cb801-config\") pod \"machine-approver-54c688565-6lm7w\" (UID: \"45b3a05c-a4a6-4e67-9c8f-c914c93cb801\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-6lm7w"
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.444970 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd3b472c-53e1-402a-ad30-244ea317f0e1-config\") pod \"openshift-controller-manager-operator-686468bdd5-s85qm\" (UID: \"dd3b472c-53e1-402a-ad30-244ea317f0e1\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-s85qm"
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.445088 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/676675d9-dafb-4b30-ad88-bea33cf42ce0-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-777zr\" (UID: \"676675d9-dafb-4b30-ad88-bea33cf42ce0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-777zr"
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.445273 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26f7f00b-d69c-4a82-934c-025eb1500a33-serving-cert\") pod \"console-operator-67c89758df-glkw9\" (UID: \"26f7f00b-d69c-4a82-934c-025eb1500a33\") " pod="openshift-console-operator/console-operator-67c89758df-glkw9"
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.450618 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/45b3a05c-a4a6-4e67-9c8f-c914c93cb801-machine-approver-tls\") pod \"machine-approver-54c688565-6lm7w\" (UID: \"45b3a05c-a4a6-4e67-9c8f-c914c93cb801\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-6lm7w"
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.452800 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/472e0bfa-47b1-4a6c-8fd5-3c5a0865c001-serving-cert\") pod \"etcd-operator-69b85846b6-bxhkt\" (UID: \"472e0bfa-47b1-4a6c-8fd5-3c5a0865c001\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-bxhkt"
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.453404 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\""
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.469992 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\""
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.476217 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/472e0bfa-47b1-4a6c-8fd5-3c5a0865c001-etcd-client\") pod \"etcd-operator-69b85846b6-bxhkt\" (UID: \"472e0bfa-47b1-4a6c-8fd5-3c5a0865c001\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-bxhkt"
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.510608 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\""
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.517034 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/10472dc9-9bed-4d08-811a-76a55f0d6cf4-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-7ntwm\" (UID: \"10472dc9-9bed-4d08-811a-76a55f0d6cf4\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7ntwm"
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.530948 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\""
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.550635 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\""
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.570069 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\""
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.589939 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\""
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.610616 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\""
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.630089 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\""
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.650431 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\""
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.679701 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\""
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.690319 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\""
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.709918 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\""
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.730331 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\""
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.750019 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\""
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.769863 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\""
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.790623 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\""
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.810382 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\""
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.829949 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\""
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.851342 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\""
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.870789 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\""
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.891404 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\""
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.910133 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\""
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.929910 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\""
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.950776 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\""
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.971476 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\""
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.990850 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\""
Jan 20 09:10:07 crc kubenswrapper[5115]: I0120 09:10:07.012575 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\""
Jan 20 09:10:07 crc kubenswrapper[5115]: I0120 09:10:07.032349 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\""
Jan 20 09:10:07 crc kubenswrapper[5115]: I0120 09:10:07.051202 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\""
Jan 20 09:10:07 crc kubenswrapper[5115]: I0120 09:10:07.071803 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\""
Jan 20 09:10:07 crc kubenswrapper[5115]: I0120 09:10:07.091396 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\""
Jan 20 09:10:07 crc kubenswrapper[5115]: I0120 09:10:07.110944 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\""
Jan 20 09:10:07 crc kubenswrapper[5115]: I0120 09:10:07.130696 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\""
Jan 20 09:10:07 crc kubenswrapper[5115]: I0120 09:10:07.150466 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\""
Jan 20 09:10:07 crc kubenswrapper[5115]: I0120 09:10:07.169857 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\""
Jan 20 09:10:07 crc kubenswrapper[5115]: I0120 09:10:07.210807 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\""
Jan 20 09:10:07 crc kubenswrapper[5115]: I0120 09:10:07.215869 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tzrjx"
Jan 20 09:10:07 crc kubenswrapper[5115]: I0120 09:10:07.231455 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\""
Jan 20 09:10:07 crc kubenswrapper[5115]: I0120 09:10:07.250936 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\""
Jan 20 09:10:07 crc kubenswrapper[5115]: I0120 09:10:07.270006 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\""
Jan 20 09:10:07 crc kubenswrapper[5115]: I0120 09:10:07.288258 5115 request.go:752] "Waited before sending request" delay="1.010586692s" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/secrets?fieldSelector=metadata.name%3Drouter-stats-default&limit=500&resourceVersion=0"
Jan 20 09:10:07 crc kubenswrapper[5115]: I0120 09:10:07.289832 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\""
Jan 20 09:10:07 crc kubenswrapper[5115]: I0120 09:10:07.311229 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\""
Jan 20 09:10:07 crc kubenswrapper[5115]: I0120 09:10:07.330080 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\""
Jan 20 09:10:07 crc kubenswrapper[5115]: I0120 09:10:07.350343 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\""
Jan 20 09:10:07 crc kubenswrapper[5115]: I0120 09:10:07.369801 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\""
Jan 20 09:10:07 crc kubenswrapper[5115]: I0120 09:10:07.390836 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\""
Jan 20 09:10:07 crc kubenswrapper[5115]: I0120 09:10:07.410284 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\""
Jan 20 09:10:07 crc kubenswrapper[5115]: I0120 09:10:07.429204 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\""
Jan 20 09:10:07 crc kubenswrapper[5115]: I0120 09:10:07.451203 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\""
Jan 20 09:10:07 crc kubenswrapper[5115]: I0120 09:10:07.469311 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\""
Jan 20 09:10:07 crc kubenswrapper[5115]: I0120 09:10:07.489482 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\""
Jan 20 09:10:07 crc kubenswrapper[5115]: I0120 09:10:07.509601 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\""
Jan 20 09:10:07 crc kubenswrapper[5115]: I0120 09:10:07.530013 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\""
Jan 20 09:10:07 crc kubenswrapper[5115]: I0120 09:10:07.549937 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\""
Jan 20 09:10:07 crc kubenswrapper[5115]: I0120 09:10:07.571525 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\""
Jan 20 09:10:07 crc kubenswrapper[5115]: I0120 09:10:07.590064 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\""
Jan 20 09:10:07 crc kubenswrapper[5115]: I0120 09:10:07.609568 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\""
Jan 20 09:10:07 crc kubenswrapper[5115]: I0120 09:10:07.629504 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\""
Jan 20 09:10:07 crc kubenswrapper[5115]: I0120 09:10:07.650117 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\""
Jan 20 09:10:07 crc kubenswrapper[5115]: I0120 09:10:07.669450 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\""
Jan 20 09:10:07 crc kubenswrapper[5115]: I0120 09:10:07.690124 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\""
Jan 20 09:10:07 crc kubenswrapper[5115]: I0120 09:10:07.710194 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\""
Jan 20 09:10:07 crc kubenswrapper[5115]: I0120 09:10:07.729734 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\""
Jan 20 09:10:07 crc kubenswrapper[5115]: I0120 09:10:07.750876 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\""
Jan 20 09:10:07 crc kubenswrapper[5115]: I0120 09:10:07.770119 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\""
Jan 20 09:10:07 crc kubenswrapper[5115]: I0120 09:10:07.790978 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\""
Jan 20 09:10:07 crc kubenswrapper[5115]: I0120 09:10:07.811260 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\""
Jan 20 09:10:07 crc kubenswrapper[5115]: I0120 09:10:07.830348 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\""
Jan 20 09:10:07 crc kubenswrapper[5115]: I0120 09:10:07.861523 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\""
Jan 20 09:10:07 crc kubenswrapper[5115]: I0120 09:10:07.869729 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\""
Jan 20 09:10:07 crc kubenswrapper[5115]: I0120 09:10:07.890118 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\""
Jan 20 09:10:07 crc kubenswrapper[5115]: I0120 09:10:07.910198 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\""
Jan 20 09:10:07 crc kubenswrapper[5115]: I0120 09:10:07.929999 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\""
Jan 20 09:10:07 crc kubenswrapper[5115]: I0120 09:10:07.950534 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\""
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.256017 5115 request.go:752] "Waited before sending request" delay="2.933686479s" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca/secrets?fieldSelector=metadata.name%3Dsigning-key&limit=500&resourceVersion=0"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.259967 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\""
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.292460 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\""
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.293118 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\""
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.293997 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\""
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.294304 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\""
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.306522 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-sysctl-allowlist\""
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.308635 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\""
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.308880 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\""
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.308959 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnsw2\" (UniqueName: \"kubernetes.io/projected/09bd4dd7-1e94-43db-b3bc-fe2dd530d8ec-kube-api-access-cnsw2\") pod \"openshift-apiserver-operator-846cbfc458-5494g\" (UID: \"09bd4dd7-1e94-43db-b3bc-fe2dd530d8ec\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-5494g"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.309082 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\""
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.309136 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\""
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.309241 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\""
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.309432 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\""
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.309593 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\""
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.309795 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\""
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.309957 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\""
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.310274 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\""
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.310376 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\""
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.309971 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\""
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.310813 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\""
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.310881 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\""
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.311159 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\""
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.311354 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\""
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.313102 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\""
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.314257 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/676675d9-dafb-4b30-ad88-bea33cf42ce0-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-777zr\" (UID: \"676675d9-dafb-4b30-ad88-bea33cf42ce0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-777zr"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.315624 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksglj\" (UniqueName: \"kubernetes.io/projected/0386fc07-a367-4188-8fab-3ce5d14ad6f2-kube-api-access-ksglj\") pod \"apiserver-8596bd845d-4x4rk\" (UID: \"0386fc07-a367-4188-8fab-3ce5d14ad6f2\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-4x4rk"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.315722 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-86w69\" (UniqueName: \"kubernetes.io/projected/dd3b472c-53e1-402a-ad30-244ea317f0e1-kube-api-access-86w69\") pod \"openshift-controller-manager-operator-686468bdd5-s85qm\" (UID: \"dd3b472c-53e1-402a-ad30-244ea317f0e1\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-s85qm"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.317171 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pzbj\" (UniqueName: \"kubernetes.io/projected/73f78db9-bab5-49ee-84a4-9f0825efca8a-kube-api-access-2pzbj\") pod \"oauth-openshift-66458b6674-c88bx\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " pod="openshift-authentication/oauth-openshift-66458b6674-c88bx"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.317431 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrc9m\" (UniqueName: \"kubernetes.io/projected/c6f108d0-ed4b-4318-bd96-7de2824bf73e-kube-api-access-rrc9m\") pod \"machine-api-operator-755bb95488-2vzsk\" (UID: \"c6f108d0-ed4b-4318-bd96-7de2824bf73e\") " pod="openshift-machine-api/machine-api-operator-755bb95488-2vzsk"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.320280 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\""
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.330784 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qph7v\" (UniqueName: \"kubernetes.io/projected/72f63421-cfe9-45f8-85fe-b779a81a7ebb-kube-api-access-qph7v\") pod \"apiserver-9ddfb9f55-xn6qp\" (UID: \"72f63421-cfe9-45f8-85fe-b779a81a7ebb\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xn6qp"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.331102 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d2be688d-0ad1-4b7a-9c55-0f2b7500cfdf-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-5rdz6\" (UID: \"d2be688d-0ad1-4b7a-9c55-0f2b7500cfdf\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-5rdz6"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.331165 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xckx\" (UniqueName: \"kubernetes.io/projected/26f7f00b-d69c-4a82-934c-025eb1500a33-kube-api-access-8xckx\") pod \"console-operator-67c89758df-glkw9\" (UID: \"26f7f00b-d69c-4a82-934c-025eb1500a33\") " pod="openshift-console-operator/console-operator-67c89758df-glkw9"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.334401 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bh7f\" (UniqueName: \"kubernetes.io/projected/472e0bfa-47b1-4a6c-8fd5-3c5a0865c001-kube-api-access-4bh7f\") pod \"etcd-operator-69b85846b6-bxhkt\" (UID: \"472e0bfa-47b1-4a6c-8fd5-3c5a0865c001\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-bxhkt"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.335078 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-g768j\" (UniqueName: \"kubernetes.io/projected/3b28944b-12d3-4087-b906-99fbf2937724-kube-api-access-g768j\") pod \"openshift-config-operator-5777786469-s5mfg\" (UID: \"3b28944b-12d3-4087-b906-99fbf2937724\") " pod="openshift-config-operator/openshift-config-operator-5777786469-s5mfg"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.338189 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-bxhkt"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.342973 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-sc4b7\" (UniqueName: \"kubernetes.io/projected/45b3a05c-a4a6-4e67-9c8f-c914c93cb801-kube-api-access-sc4b7\") pod \"machine-approver-54c688565-6lm7w\" (UID: \"45b3a05c-a4a6-4e67-9c8f-c914c93cb801\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-6lm7w"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.343766 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsm7d\" (UniqueName: \"kubernetes.io/projected/d2be688d-0ad1-4b7a-9c55-0f2b7500cfdf-kube-api-access-fsm7d\") pod \"ingress-operator-6b9cb4dbcf-5rdz6\" (UID: \"d2be688d-0ad1-4b7a-9c55-0f2b7500cfdf\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-5rdz6"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.343766 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gz78h\" (UniqueName: \"kubernetes.io/projected/10472dc9-9bed-4d08-811a-76a55f0d6cf4-kube-api-access-gz78h\") pod \"machine-config-controller-f9cdd68f7-7ntwm\" (UID: \"10472dc9-9bed-4d08-811a-76a55f0d6cf4\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7ntwm"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.344482 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4mq5\" (UniqueName: \"kubernetes.io/projected/9aa837bd-63fc-4bb8-b158-d8632117a117-kube-api-access-k4mq5\") pod \"console-64d44f6ddf-78z8z\" (UID: \"9aa837bd-63fc-4bb8-b158-d8632117a117\") " pod="openshift-console/console-64d44f6ddf-78z8z"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.345197 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5bxk\" (UniqueName: \"kubernetes.io/projected/603cfb78-063c-444d-8434-38e8ff6b5f70-kube-api-access-d5bxk\") pod \"authentication-operator-7f5c659b84-pss2p\" (UID: \"603cfb78-063c-444d-8434-38e8ff6b5f70\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-pss2p"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.354680 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmb7q\" (UniqueName: \"kubernetes.io/projected/b9ac66ad-91ae-4ffd-b159-a7549ca71803-kube-api-access-zmb7q\") pod \"downloads-747b44746d-ljj2s\" (UID: \"b9ac66ad-91ae-4ffd-b159-a7549ca71803\") " pod="openshift-console/downloads-747b44746d-ljj2s"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.384763 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/580c8ecd-e9bb-4c33-aeb2-f304adb8119c-trusted-ca\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.384825 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7mcb\" (UniqueName: \"kubernetes.io/projected/580c8ecd-e9bb-4c33-aeb2-f304adb8119c-kube-api-access-v7mcb\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.390008 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/580c8ecd-e9bb-4c33-aeb2-f304adb8119c-installation-pull-secrets\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.390312 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/580c8ecd-e9bb-4c33-aeb2-f304adb8119c-registry-certificates\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.390456 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.390605 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/580c8ecd-e9bb-4c33-aeb2-f304adb8119c-registry-tls\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.390695 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/580c8ecd-e9bb-4c33-aeb2-f304adb8119c-ca-trust-extracted\") pod 
\"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.390980 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/580c8ecd-e9bb-4c33-aeb2-f304adb8119c-bound-sa-token\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:09 crc kubenswrapper[5115]: E0120 09:10:09.391003 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:09.890983957 +0000 UTC m=+120.059762487 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.391378 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7ntwm" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.434210 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-2vzsk" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.441917 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-xn6qp" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.475375 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-5494g" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.488296 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-4x4rk" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.494785 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-s5mfg" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.496498 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:10:09 crc kubenswrapper[5115]: E0120 09:10:09.496532 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:09.996479245 +0000 UTC m=+120.165257775 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.496799 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-697ts\" (UniqueName: \"kubernetes.io/projected/664dc1e9-b220-4dd9-8576-b5798850bc57-kube-api-access-697ts\") pod \"controller-manager-65b6cccf98-lg8fb\" (UID: \"664dc1e9-b220-4dd9-8576-b5798850bc57\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-lg8fb" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.496856 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/ecb1b469-4758-499e-a0ba-8204058552be-tmp-dir\") pod \"kube-apiserver-operator-575994946d-m6krp\" (UID: \"ecb1b469-4758-499e-a0ba-8204058552be\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-m6krp" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.496885 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/4d93cff2-21b0-4fcb-b899-b6efe5a56822-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-pkz7s\" (UID: \"4d93cff2-21b0-4fcb-b899-b6efe5a56822\") " pod="openshift-multus/cni-sysctl-allowlist-ds-pkz7s" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.496941 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x65zw\" (UniqueName: 
\"kubernetes.io/projected/4d93cff2-21b0-4fcb-b899-b6efe5a56822-kube-api-access-x65zw\") pod \"cni-sysctl-allowlist-ds-pkz7s\" (UID: \"4d93cff2-21b0-4fcb-b899-b6efe5a56822\") " pod="openshift-multus/cni-sysctl-allowlist-ds-pkz7s" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.496956 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6008f0e6-56c0-4fdd-89b8-0649fb365b0f-webhook-certs\") pod \"multus-admission-controller-69db94689b-ztcgs\" (UID: \"6008f0e6-56c0-4fdd-89b8-0649fb365b0f\") " pod="openshift-multus/multus-admission-controller-69db94689b-ztcgs" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.497009 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/8c6ba355-2c21-431c-8767-821fb9075e1c-tmpfs\") pod \"olm-operator-5cdf44d969-95nt8\" (UID: \"8c6ba355-2c21-431c-8767-821fb9075e1c\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-95nt8" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.497026 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a8dd6004-2cc4-4971-9dcb-18d8871286b8-socket-dir\") pod \"csi-hostpathplugin-ttcl5\" (UID: \"a8dd6004-2cc4-4971-9dcb-18d8871286b8\") " pod="hostpath-provisioner/csi-hostpathplugin-ttcl5" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.497048 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d60eae6f-6fe4-41cd-8c8f-54749aacc87e-config-volume\") pod \"dns-default-59xcc\" (UID: \"d60eae6f-6fe4-41cd-8c8f-54749aacc87e\") " pod="openshift-dns/dns-default-59xcc" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.497087 5115 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/21e183fd-a881-4f61-a726-bcaaf60e71d5-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-rjtnv\" (UID: \"21e183fd-a881-4f61-a726-bcaaf60e71d5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-rjtnv" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.497107 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/ac548cbe-da92-4dd6-bd33-705689710018-tmp-dir\") pod \"dns-operator-799b87ffcd-8622t\" (UID: \"ac548cbe-da92-4dd6-bd33-705689710018\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-8622t" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.497171 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/118decd3-a665-4997-bd40-0f68d2295238-images\") pod \"machine-config-operator-67c9d58cbb-m6g4t\" (UID: \"118decd3-a665-4997-bd40-0f68d2295238\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-m6g4t" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.497187 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/a8dd6004-2cc4-4971-9dcb-18d8871286b8-plugins-dir\") pod \"csi-hostpathplugin-ttcl5\" (UID: \"a8dd6004-2cc4-4971-9dcb-18d8871286b8\") " pod="hostpath-provisioner/csi-hostpathplugin-ttcl5" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.497201 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmrql\" (UniqueName: \"kubernetes.io/projected/d60eae6f-6fe4-41cd-8c8f-54749aacc87e-kube-api-access-wmrql\") pod \"dns-default-59xcc\" (UID: 
\"d60eae6f-6fe4-41cd-8c8f-54749aacc87e\") " pod="openshift-dns/dns-default-59xcc" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.497247 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8c6ba355-2c21-431c-8767-821fb9075e1c-srv-cert\") pod \"olm-operator-5cdf44d969-95nt8\" (UID: \"8c6ba355-2c21-431c-8767-821fb9075e1c\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-95nt8" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.497266 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b39cc292-22ad-4fb0-9d3f-6467c81680eb-serving-cert\") pod \"route-controller-manager-776cdc94d6-jxpqr\" (UID: \"b39cc292-22ad-4fb0-9d3f-6467c81680eb\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jxpqr" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.497327 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/0d738dd6-3c15-4131-837d-591792cb41cd-stats-auth\") pod \"router-default-68cf44c8b8-n9hxc\" (UID: \"0d738dd6-3c15-4131-837d-591792cb41cd\") " pod="openshift-ingress/router-default-68cf44c8b8-n9hxc" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.497371 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/664dc1e9-b220-4dd9-8576-b5798850bc57-config\") pod \"controller-manager-65b6cccf98-lg8fb\" (UID: \"664dc1e9-b220-4dd9-8576-b5798850bc57\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-lg8fb" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.497795 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: 
\"kubernetes.io/empty-dir/ef29fedc-43ad-4cf5-b3ef-10a28c46842f-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-h9rh5\" (UID: \"ef29fedc-43ad-4cf5-b3ef-10a28c46842f\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-h9rh5" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.497829 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4cwq\" (UniqueName: \"kubernetes.io/projected/3b4463ed-eba2-4ba4-afb8-2424e957fc37-kube-api-access-h4cwq\") pod \"service-ca-74545575db-l96rs\" (UID: \"3b4463ed-eba2-4ba4-afb8-2424e957fc37\") " pod="openshift-service-ca/service-ca-74545575db-l96rs" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.497859 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/d60eae6f-6fe4-41cd-8c8f-54749aacc87e-tmp-dir\") pod \"dns-default-59xcc\" (UID: \"d60eae6f-6fe4-41cd-8c8f-54749aacc87e\") " pod="openshift-dns/dns-default-59xcc" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.497880 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/118decd3-a665-4997-bd40-0f68d2295238-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-m6g4t\" (UID: \"118decd3-a665-4997-bd40-0f68d2295238\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-m6g4t" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.497928 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/580c8ecd-e9bb-4c33-aeb2-f304adb8119c-installation-pull-secrets\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:09 crc 
kubenswrapper[5115]: I0120 09:10:09.497954 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/4d93cff2-21b0-4fcb-b899-b6efe5a56822-ready\") pod \"cni-sysctl-allowlist-ds-pkz7s\" (UID: \"4d93cff2-21b0-4fcb-b899-b6efe5a56822\") " pod="openshift-multus/cni-sysctl-allowlist-ds-pkz7s" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.497979 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/664dc1e9-b220-4dd9-8576-b5798850bc57-serving-cert\") pod \"controller-manager-65b6cccf98-lg8fb\" (UID: \"664dc1e9-b220-4dd9-8576-b5798850bc57\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-lg8fb" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.498000 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4cj7\" (UniqueName: \"kubernetes.io/projected/f7ec9898-6747-40af-be60-ce1289d0a4e6-kube-api-access-f4cj7\") pod \"control-plane-machine-set-operator-75ffdb6fcd-69gcn\" (UID: \"f7ec9898-6747-40af-be60-ce1289d0a4e6\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-69gcn" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.498021 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/664dc1e9-b220-4dd9-8576-b5798850bc57-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-lg8fb\" (UID: \"664dc1e9-b220-4dd9-8576-b5798850bc57\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-lg8fb" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.498045 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/3984fc5a-413e-46e1-94ab-3c230891fe87-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-9gfdh\" (UID: \"3984fc5a-413e-46e1-94ab-3c230891fe87\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-9gfdh" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.498068 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/b967aa59-3ad8-4a80-a870-970c4166dd31-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-gc77j\" (UID: \"b967aa59-3ad8-4a80-a870-970c4166dd31\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-gc77j" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.498636 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/a8dd6004-2cc4-4971-9dcb-18d8871286b8-csi-data-dir\") pod \"csi-hostpathplugin-ttcl5\" (UID: \"a8dd6004-2cc4-4971-9dcb-18d8871286b8\") " pod="hostpath-provisioner/csi-hostpathplugin-ttcl5" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.498679 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqt2z\" (UniqueName: \"kubernetes.io/projected/082f3bd2-f112-4f2e-b955-0826aac6df97-kube-api-access-xqt2z\") pod \"collect-profiles-29481660-hh6m6\" (UID: \"082f3bd2-f112-4f2e-b955-0826aac6df97\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481660-hh6m6" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.498701 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3984fc5a-413e-46e1-94ab-3c230891fe87-tmp\") pod \"marketplace-operator-547dbd544d-9gfdh\" (UID: \"3984fc5a-413e-46e1-94ab-3c230891fe87\") " 
pod="openshift-marketplace/marketplace-operator-547dbd544d-9gfdh" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.498733 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4d93cff2-21b0-4fcb-b899-b6efe5a56822-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-pkz7s\" (UID: \"4d93cff2-21b0-4fcb-b899-b6efe5a56822\") " pod="openshift-multus/cni-sysctl-allowlist-ds-pkz7s" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.498755 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/664dc1e9-b220-4dd9-8576-b5798850bc57-tmp\") pod \"controller-manager-65b6cccf98-lg8fb\" (UID: \"664dc1e9-b220-4dd9-8576-b5798850bc57\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-lg8fb" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.498776 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d738dd6-3c15-4131-837d-591792cb41cd-service-ca-bundle\") pod \"router-default-68cf44c8b8-n9hxc\" (UID: \"0d738dd6-3c15-4131-837d-591792cb41cd\") " pod="openshift-ingress/router-default-68cf44c8b8-n9hxc" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.498888 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3984fc5a-413e-46e1-94ab-3c230891fe87-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-9gfdh\" (UID: \"3984fc5a-413e-46e1-94ab-3c230891fe87\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-9gfdh" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.499092 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/fbc48af4-261d-4599-a7fd-edd26b2b4022-cert\") pod \"ingress-canary-ft42n\" (UID: \"fbc48af4-261d-4599-a7fd-edd26b2b4022\") " pod="openshift-ingress-canary/ingress-canary-ft42n" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.499114 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0d738dd6-3c15-4131-837d-591792cb41cd-metrics-certs\") pod \"router-default-68cf44c8b8-n9hxc\" (UID: \"0d738dd6-3c15-4131-837d-591792cb41cd\") " pod="openshift-ingress/router-default-68cf44c8b8-n9hxc" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.499153 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/580c8ecd-e9bb-4c33-aeb2-f304adb8119c-registry-certificates\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.499178 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21e183fd-a881-4f61-a726-bcaaf60e71d5-config\") pod \"kube-controller-manager-operator-69d5f845f8-rjtnv\" (UID: \"21e183fd-a881-4f61-a726-bcaaf60e71d5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-rjtnv" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.499200 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6l82\" (UniqueName: \"kubernetes.io/projected/6008f0e6-56c0-4fdd-89b8-0649fb365b0f-kube-api-access-g6l82\") pod \"multus-admission-controller-69db94689b-ztcgs\" (UID: \"6008f0e6-56c0-4fdd-89b8-0649fb365b0f\") " pod="openshift-multus/multus-admission-controller-69db94689b-ztcgs" Jan 20 09:10:09 crc 
kubenswrapper[5115]: I0120 09:10:09.499231 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.499256 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b39cc292-22ad-4fb0-9d3f-6467c81680eb-client-ca\") pod \"route-controller-manager-776cdc94d6-jxpqr\" (UID: \"b39cc292-22ad-4fb0-9d3f-6467c81680eb\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jxpqr" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.499278 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6hvv\" (UniqueName: \"kubernetes.io/projected/3984fc5a-413e-46e1-94ab-3c230891fe87-kube-api-access-l6hvv\") pod \"marketplace-operator-547dbd544d-9gfdh\" (UID: \"3984fc5a-413e-46e1-94ab-3c230891fe87\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-9gfdh" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.499410 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssv77\" (UniqueName: \"kubernetes.io/projected/fbc48af4-261d-4599-a7fd-edd26b2b4022-kube-api-access-ssv77\") pod \"ingress-canary-ft42n\" (UID: \"fbc48af4-261d-4599-a7fd-edd26b2b4022\") " pod="openshift-ingress-canary/ingress-canary-ft42n" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.499456 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: 
\"kubernetes.io/empty-dir/80f8b6d4-7eb4-42ec-9976-60dc6db3148f-tmpfs\") pod \"catalog-operator-75ff9f647d-mfd49\" (UID: \"80f8b6d4-7eb4-42ec-9976-60dc6db3148f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-mfd49" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.499494 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d702c0ea-d2bd-41dc-9a3a-39caacbb288d-webhook-cert\") pod \"packageserver-7d4fc7d867-smr5d\" (UID: \"d702c0ea-d2bd-41dc-9a3a-39caacbb288d\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-smr5d" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.499516 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cj2cf\" (UniqueName: \"kubernetes.io/projected/b39cc292-22ad-4fb0-9d3f-6467c81680eb-kube-api-access-cj2cf\") pod \"route-controller-manager-776cdc94d6-jxpqr\" (UID: \"b39cc292-22ad-4fb0-9d3f-6467c81680eb\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jxpqr" Jan 20 09:10:09 crc kubenswrapper[5115]: E0120 09:10:09.501074 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:10.001053407 +0000 UTC m=+120.169831997 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.502197 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/580c8ecd-e9bb-4c33-aeb2-f304adb8119c-registry-certificates\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.506392 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ef29fedc-43ad-4cf5-b3ef-10a28c46842f-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-h9rh5\" (UID: \"ef29fedc-43ad-4cf5-b3ef-10a28c46842f\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-h9rh5" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.506463 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ecb1b469-4758-499e-a0ba-8204058552be-config\") pod \"kube-apiserver-operator-575994946d-m6krp\" (UID: \"ecb1b469-4758-499e-a0ba-8204058552be\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-m6krp" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.506539 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/b39cc292-22ad-4fb0-9d3f-6467c81680eb-config\") pod \"route-controller-manager-776cdc94d6-jxpqr\" (UID: \"b39cc292-22ad-4fb0-9d3f-6467c81680eb\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jxpqr" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.506644 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/580c8ecd-e9bb-4c33-aeb2-f304adb8119c-trusted-ca\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.506666 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ef29fedc-43ad-4cf5-b3ef-10a28c46842f-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-h9rh5\" (UID: \"ef29fedc-43ad-4cf5-b3ef-10a28c46842f\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-h9rh5" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.506685 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/80f8b6d4-7eb4-42ec-9976-60dc6db3148f-srv-cert\") pod \"catalog-operator-75ff9f647d-mfd49\" (UID: \"80f8b6d4-7eb4-42ec-9976-60dc6db3148f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-mfd49" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.506708 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfgv4\" (UniqueName: \"kubernetes.io/projected/ef29fedc-43ad-4cf5-b3ef-10a28c46842f-kube-api-access-gfgv4\") pod \"cluster-image-registry-operator-86c45576b9-h9rh5\" (UID: \"ef29fedc-43ad-4cf5-b3ef-10a28c46842f\") " 
pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-h9rh5" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.506736 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/273a5bb6-cb84-41ee-a44a-ee5bc13291f5-node-bootstrap-token\") pod \"machine-config-server-mg52n\" (UID: \"273a5bb6-cb84-41ee-a44a-ee5bc13291f5\") " pod="openshift-machine-config-operator/machine-config-server-mg52n" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.506758 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znfxc\" (UniqueName: \"kubernetes.io/projected/a8dd6004-2cc4-4971-9dcb-18d8871286b8-kube-api-access-znfxc\") pod \"csi-hostpathplugin-ttcl5\" (UID: \"a8dd6004-2cc4-4971-9dcb-18d8871286b8\") " pod="hostpath-provisioner/csi-hostpathplugin-ttcl5" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.506843 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/f7ec9898-6747-40af-be60-ce1289d0a4e6-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-69gcn\" (UID: \"f7ec9898-6747-40af-be60-ce1289d0a4e6\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-69gcn" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.506875 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d60eae6f-6fe4-41cd-8c8f-54749aacc87e-metrics-tls\") pod \"dns-default-59xcc\" (UID: \"d60eae6f-6fe4-41cd-8c8f-54749aacc87e\") " pod="openshift-dns/dns-default-59xcc" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.506930 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-kmtmj\" (UniqueName: \"kubernetes.io/projected/0d738dd6-3c15-4131-837d-591792cb41cd-kube-api-access-kmtmj\") pod \"router-default-68cf44c8b8-n9hxc\" (UID: \"0d738dd6-3c15-4131-837d-591792cb41cd\") " pod="openshift-ingress/router-default-68cf44c8b8-n9hxc" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.506953 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/118decd3-a665-4997-bd40-0f68d2295238-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-m6g4t\" (UID: \"118decd3-a665-4997-bd40-0f68d2295238\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-m6g4t" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.506989 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d702c0ea-d2bd-41dc-9a3a-39caacbb288d-apiservice-cert\") pod \"packageserver-7d4fc7d867-smr5d\" (UID: \"d702c0ea-d2bd-41dc-9a3a-39caacbb288d\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-smr5d" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.507007 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/3b4463ed-eba2-4ba4-afb8-2424e957fc37-signing-key\") pod \"service-ca-74545575db-l96rs\" (UID: \"3b4463ed-eba2-4ba4-afb8-2424e957fc37\") " pod="openshift-service-ca/service-ca-74545575db-l96rs" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.507111 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/d702c0ea-d2bd-41dc-9a3a-39caacbb288d-tmpfs\") pod \"packageserver-7d4fc7d867-smr5d\" (UID: \"d702c0ea-d2bd-41dc-9a3a-39caacbb288d\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-smr5d" Jan 20 09:10:09 
crc kubenswrapper[5115]: I0120 09:10:09.507134 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6477423-4b0a-43d7-9514-bde25388af77-config\") pod \"kube-storage-version-migrator-operator-565b79b866-2pl95\" (UID: \"f6477423-4b0a-43d7-9514-bde25388af77\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-2pl95" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.507157 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ef29fedc-43ad-4cf5-b3ef-10a28c46842f-tmp\") pod \"cluster-image-registry-operator-86c45576b9-h9rh5\" (UID: \"ef29fedc-43ad-4cf5-b3ef-10a28c46842f\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-h9rh5" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.507177 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a8dd6004-2cc4-4971-9dcb-18d8871286b8-registration-dir\") pod \"csi-hostpathplugin-ttcl5\" (UID: \"a8dd6004-2cc4-4971-9dcb-18d8871286b8\") " pod="hostpath-provisioner/csi-hostpathplugin-ttcl5" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.507217 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-v7mcb\" (UniqueName: \"kubernetes.io/projected/580c8ecd-e9bb-4c33-aeb2-f304adb8119c-kube-api-access-v7mcb\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.507242 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/f41303d0-06e3-4554-8fa9-d9dd935d0bec-serving-cert\") pod \"service-ca-operator-5b9c976747-9hn8c\" (UID: \"f41303d0-06e3-4554-8fa9-d9dd935d0bec\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-9hn8c" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.507263 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9cmd\" (UniqueName: \"kubernetes.io/projected/8c6ba355-2c21-431c-8767-821fb9075e1c-kube-api-access-r9cmd\") pod \"olm-operator-5cdf44d969-95nt8\" (UID: \"8c6ba355-2c21-431c-8767-821fb9075e1c\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-95nt8" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.507277 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/580c8ecd-e9bb-4c33-aeb2-f304adb8119c-installation-pull-secrets\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.507542 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-pss2p" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.507290 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/082f3bd2-f112-4f2e-b955-0826aac6df97-config-volume\") pod \"collect-profiles-29481660-hh6m6\" (UID: \"082f3bd2-f112-4f2e-b955-0826aac6df97\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481660-hh6m6" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.508098 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/31a102f9-d392-481f-85f7-4be9117cd31d-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-lcng5\" (UID: \"31a102f9-d392-481f-85f7-4be9117cd31d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-lcng5" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.508153 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f41303d0-06e3-4554-8fa9-d9dd935d0bec-config\") pod \"service-ca-operator-5b9c976747-9hn8c\" (UID: \"f41303d0-06e3-4554-8fa9-d9dd935d0bec\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-9hn8c" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.508175 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/80f8b6d4-7eb4-42ec-9976-60dc6db3148f-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-mfd49\" (UID: \"80f8b6d4-7eb4-42ec-9976-60dc6db3148f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-mfd49" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.508196 5115 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7kxc\" (UniqueName: \"kubernetes.io/projected/b967aa59-3ad8-4a80-a870-970c4166dd31-kube-api-access-v7kxc\") pod \"package-server-manager-77f986bd66-gc77j\" (UID: \"b967aa59-3ad8-4a80-a870-970c4166dd31\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-gc77j" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.508268 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/ef29fedc-43ad-4cf5-b3ef-10a28c46842f-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-h9rh5\" (UID: \"ef29fedc-43ad-4cf5-b3ef-10a28c46842f\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-h9rh5" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.508290 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5kpq\" (UniqueName: \"kubernetes.io/projected/118decd3-a665-4997-bd40-0f68d2295238-kube-api-access-z5kpq\") pod \"machine-config-operator-67c9d58cbb-m6g4t\" (UID: \"118decd3-a665-4997-bd40-0f68d2295238\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-m6g4t" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.508312 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/0d738dd6-3c15-4131-837d-591792cb41cd-default-certificate\") pod \"router-default-68cf44c8b8-n9hxc\" (UID: \"0d738dd6-3c15-4131-837d-591792cb41cd\") " pod="openshift-ingress/router-default-68cf44c8b8-n9hxc" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.508357 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/ac548cbe-da92-4dd6-bd33-705689710018-metrics-tls\") pod \"dns-operator-799b87ffcd-8622t\" (UID: \"ac548cbe-da92-4dd6-bd33-705689710018\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-8622t" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.508389 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ecb1b469-4758-499e-a0ba-8204058552be-kube-api-access\") pod \"kube-apiserver-operator-575994946d-m6krp\" (UID: \"ecb1b469-4758-499e-a0ba-8204058552be\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-m6krp" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.508417 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nzmj\" (UniqueName: \"kubernetes.io/projected/31a102f9-d392-481f-85f7-4be9117cd31d-kube-api-access-4nzmj\") pod \"cluster-samples-operator-6b564684c8-lcng5\" (UID: \"31a102f9-d392-481f-85f7-4be9117cd31d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-lcng5" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.508432 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/664dc1e9-b220-4dd9-8576-b5798850bc57-client-ca\") pod \"controller-manager-65b6cccf98-lg8fb\" (UID: \"664dc1e9-b220-4dd9-8576-b5798850bc57\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-lg8fb" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.508450 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/273a5bb6-cb84-41ee-a44a-ee5bc13291f5-certs\") pod \"machine-config-server-mg52n\" (UID: \"273a5bb6-cb84-41ee-a44a-ee5bc13291f5\") " pod="openshift-machine-config-operator/machine-config-server-mg52n" Jan 20 
09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.508479 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/21e183fd-a881-4f61-a726-bcaaf60e71d5-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-rjtnv\" (UID: \"21e183fd-a881-4f61-a726-bcaaf60e71d5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-rjtnv" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.508496 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpl59\" (UniqueName: \"kubernetes.io/projected/80f8b6d4-7eb4-42ec-9976-60dc6db3148f-kube-api-access-bpl59\") pod \"catalog-operator-75ff9f647d-mfd49\" (UID: \"80f8b6d4-7eb4-42ec-9976-60dc6db3148f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-mfd49" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.508562 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/3b4463ed-eba2-4ba4-afb8-2424e957fc37-signing-cabundle\") pod \"service-ca-74545575db-l96rs\" (UID: \"3b4463ed-eba2-4ba4-afb8-2424e957fc37\") " pod="openshift-service-ca/service-ca-74545575db-l96rs" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.508585 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hb7kn\" (UniqueName: \"kubernetes.io/projected/f6477423-4b0a-43d7-9514-bde25388af77-kube-api-access-hb7kn\") pod \"kube-storage-version-migrator-operator-565b79b866-2pl95\" (UID: \"f6477423-4b0a-43d7-9514-bde25388af77\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-2pl95" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.508604 5115 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p86sf\" (UniqueName: \"kubernetes.io/projected/01855721-bd0b-4ddc-91d0-be658345b9c5-kube-api-access-p86sf\") pod \"migrator-866fcbc849-xtwqk\" (UID: \"01855721-bd0b-4ddc-91d0-be658345b9c5\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-xtwqk" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.508625 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21e183fd-a881-4f61-a726-bcaaf60e71d5-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-rjtnv\" (UID: \"21e183fd-a881-4f61-a726-bcaaf60e71d5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-rjtnv" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.508645 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ecb1b469-4758-499e-a0ba-8204058552be-serving-cert\") pod \"kube-apiserver-operator-575994946d-m6krp\" (UID: \"ecb1b469-4758-499e-a0ba-8204058552be\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-m6krp" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.508661 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8c6ba355-2c21-431c-8767-821fb9075e1c-profile-collector-cert\") pod \"olm-operator-5cdf44d969-95nt8\" (UID: \"8c6ba355-2c21-431c-8767-821fb9075e1c\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-95nt8" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.508714 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6477423-4b0a-43d7-9514-bde25388af77-serving-cert\") pod 
\"kube-storage-version-migrator-operator-565b79b866-2pl95\" (UID: \"f6477423-4b0a-43d7-9514-bde25388af77\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-2pl95" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.511375 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/a8dd6004-2cc4-4971-9dcb-18d8871286b8-mountpoint-dir\") pod \"csi-hostpathplugin-ttcl5\" (UID: \"a8dd6004-2cc4-4971-9dcb-18d8871286b8\") " pod="hostpath-provisioner/csi-hostpathplugin-ttcl5" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.511426 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/580c8ecd-e9bb-4c33-aeb2-f304adb8119c-registry-tls\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.511449 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/580c8ecd-e9bb-4c33-aeb2-f304adb8119c-ca-trust-extracted\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.511489 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/580c8ecd-e9bb-4c33-aeb2-f304adb8119c-bound-sa-token\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.512312 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for 
volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/580c8ecd-e9bb-4c33-aeb2-f304adb8119c-ca-trust-extracted\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.512812 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/082f3bd2-f112-4f2e-b955-0826aac6df97-secret-volume\") pod \"collect-profiles-29481660-hh6m6\" (UID: \"082f3bd2-f112-4f2e-b955-0826aac6df97\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481660-hh6m6" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.512878 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nz27\" (UniqueName: \"kubernetes.io/projected/273a5bb6-cb84-41ee-a44a-ee5bc13291f5-kube-api-access-2nz27\") pod \"machine-config-server-mg52n\" (UID: \"273a5bb6-cb84-41ee-a44a-ee5bc13291f5\") " pod="openshift-machine-config-operator/machine-config-server-mg52n" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.513219 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7857\" (UniqueName: \"kubernetes.io/projected/ac548cbe-da92-4dd6-bd33-705689710018-kube-api-access-k7857\") pod \"dns-operator-799b87ffcd-8622t\" (UID: \"ac548cbe-da92-4dd6-bd33-705689710018\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-8622t" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.513470 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8b8c4\" (UniqueName: \"kubernetes.io/projected/f41303d0-06e3-4554-8fa9-d9dd935d0bec-kube-api-access-8b8c4\") pod \"service-ca-operator-5b9c976747-9hn8c\" (UID: \"f41303d0-06e3-4554-8fa9-d9dd935d0bec\") " 
pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-9hn8c" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.513627 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrkgx\" (UniqueName: \"kubernetes.io/projected/d702c0ea-d2bd-41dc-9a3a-39caacbb288d-kube-api-access-nrkgx\") pod \"packageserver-7d4fc7d867-smr5d\" (UID: \"d702c0ea-d2bd-41dc-9a3a-39caacbb288d\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-smr5d" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.513688 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b39cc292-22ad-4fb0-9d3f-6467c81680eb-tmp\") pod \"route-controller-manager-776cdc94d6-jxpqr\" (UID: \"b39cc292-22ad-4fb0-9d3f-6467c81680eb\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jxpqr" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.515948 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.518639 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/580c8ecd-e9bb-4c33-aeb2-f304adb8119c-trusted-ca\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.528375 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/580c8ecd-e9bb-4c33-aeb2-f304adb8119c-registry-tls\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.530435 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-64d44f6ddf-78z8z" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.533587 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/580c8ecd-e9bb-4c33-aeb2-f304adb8119c-bound-sa-token\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.539235 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7mcb\" (UniqueName: \"kubernetes.io/projected/580c8ecd-e9bb-4c33-aeb2-f304adb8119c-kube-api-access-v7mcb\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.539349 5115 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-glkw9" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.553039 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-s85qm" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.561180 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-6lm7w" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.570969 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-747b44746d-ljj2s" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.581275 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-777zr" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.603522 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-5rdz6" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.624871 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.625086 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b39cc292-22ad-4fb0-9d3f-6467c81680eb-client-ca\") pod \"route-controller-manager-776cdc94d6-jxpqr\" (UID: \"b39cc292-22ad-4fb0-9d3f-6467c81680eb\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jxpqr" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.625119 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l6hvv\" (UniqueName: \"kubernetes.io/projected/3984fc5a-413e-46e1-94ab-3c230891fe87-kube-api-access-l6hvv\") pod \"marketplace-operator-547dbd544d-9gfdh\" (UID: \"3984fc5a-413e-46e1-94ab-3c230891fe87\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-9gfdh" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.625139 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ssv77\" (UniqueName: \"kubernetes.io/projected/fbc48af4-261d-4599-a7fd-edd26b2b4022-kube-api-access-ssv77\") pod \"ingress-canary-ft42n\" (UID: \"fbc48af4-261d-4599-a7fd-edd26b2b4022\") " pod="openshift-ingress-canary/ingress-canary-ft42n" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.625158 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/80f8b6d4-7eb4-42ec-9976-60dc6db3148f-tmpfs\") pod 
\"catalog-operator-75ff9f647d-mfd49\" (UID: \"80f8b6d4-7eb4-42ec-9976-60dc6db3148f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-mfd49" Jan 20 09:10:09 crc kubenswrapper[5115]: E0120 09:10:09.625293 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:10.125266505 +0000 UTC m=+120.294045035 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.626107 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b39cc292-22ad-4fb0-9d3f-6467c81680eb-client-ca\") pod \"route-controller-manager-776cdc94d6-jxpqr\" (UID: \"b39cc292-22ad-4fb0-9d3f-6467c81680eb\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jxpqr" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.626110 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d702c0ea-d2bd-41dc-9a3a-39caacbb288d-webhook-cert\") pod \"packageserver-7d4fc7d867-smr5d\" (UID: \"d702c0ea-d2bd-41dc-9a3a-39caacbb288d\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-smr5d" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.626165 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"kube-api-access-cj2cf\" (UniqueName: \"kubernetes.io/projected/b39cc292-22ad-4fb0-9d3f-6467c81680eb-kube-api-access-cj2cf\") pod \"route-controller-manager-776cdc94d6-jxpqr\" (UID: \"b39cc292-22ad-4fb0-9d3f-6467c81680eb\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jxpqr" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.626193 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ef29fedc-43ad-4cf5-b3ef-10a28c46842f-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-h9rh5\" (UID: \"ef29fedc-43ad-4cf5-b3ef-10a28c46842f\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-h9rh5" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.626211 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ecb1b469-4758-499e-a0ba-8204058552be-config\") pod \"kube-apiserver-operator-575994946d-m6krp\" (UID: \"ecb1b469-4758-499e-a0ba-8204058552be\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-m6krp" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.626226 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b39cc292-22ad-4fb0-9d3f-6467c81680eb-config\") pod \"route-controller-manager-776cdc94d6-jxpqr\" (UID: \"b39cc292-22ad-4fb0-9d3f-6467c81680eb\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jxpqr" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.626247 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ef29fedc-43ad-4cf5-b3ef-10a28c46842f-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-h9rh5\" (UID: \"ef29fedc-43ad-4cf5-b3ef-10a28c46842f\") " 
pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-h9rh5" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.626263 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/80f8b6d4-7eb4-42ec-9976-60dc6db3148f-srv-cert\") pod \"catalog-operator-75ff9f647d-mfd49\" (UID: \"80f8b6d4-7eb4-42ec-9976-60dc6db3148f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-mfd49" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.626281 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gfgv4\" (UniqueName: \"kubernetes.io/projected/ef29fedc-43ad-4cf5-b3ef-10a28c46842f-kube-api-access-gfgv4\") pod \"cluster-image-registry-operator-86c45576b9-h9rh5\" (UID: \"ef29fedc-43ad-4cf5-b3ef-10a28c46842f\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-h9rh5" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.626299 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/273a5bb6-cb84-41ee-a44a-ee5bc13291f5-node-bootstrap-token\") pod \"machine-config-server-mg52n\" (UID: \"273a5bb6-cb84-41ee-a44a-ee5bc13291f5\") " pod="openshift-machine-config-operator/machine-config-server-mg52n" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.626317 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-znfxc\" (UniqueName: \"kubernetes.io/projected/a8dd6004-2cc4-4971-9dcb-18d8871286b8-kube-api-access-znfxc\") pod \"csi-hostpathplugin-ttcl5\" (UID: \"a8dd6004-2cc4-4971-9dcb-18d8871286b8\") " pod="hostpath-provisioner/csi-hostpathplugin-ttcl5" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.626340 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/f7ec9898-6747-40af-be60-ce1289d0a4e6-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-69gcn\" (UID: \"f7ec9898-6747-40af-be60-ce1289d0a4e6\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-69gcn" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.626363 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d60eae6f-6fe4-41cd-8c8f-54749aacc87e-metrics-tls\") pod \"dns-default-59xcc\" (UID: \"d60eae6f-6fe4-41cd-8c8f-54749aacc87e\") " pod="openshift-dns/dns-default-59xcc" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.626380 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kmtmj\" (UniqueName: \"kubernetes.io/projected/0d738dd6-3c15-4131-837d-591792cb41cd-kube-api-access-kmtmj\") pod \"router-default-68cf44c8b8-n9hxc\" (UID: \"0d738dd6-3c15-4131-837d-591792cb41cd\") " pod="openshift-ingress/router-default-68cf44c8b8-n9hxc" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.626398 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/118decd3-a665-4997-bd40-0f68d2295238-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-m6g4t\" (UID: \"118decd3-a665-4997-bd40-0f68d2295238\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-m6g4t" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.626416 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d702c0ea-d2bd-41dc-9a3a-39caacbb288d-apiservice-cert\") pod \"packageserver-7d4fc7d867-smr5d\" (UID: \"d702c0ea-d2bd-41dc-9a3a-39caacbb288d\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-smr5d" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.626434 5115 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/3b4463ed-eba2-4ba4-afb8-2424e957fc37-signing-key\") pod \"service-ca-74545575db-l96rs\" (UID: \"3b4463ed-eba2-4ba4-afb8-2424e957fc37\") " pod="openshift-service-ca/service-ca-74545575db-l96rs" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.626459 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/d702c0ea-d2bd-41dc-9a3a-39caacbb288d-tmpfs\") pod \"packageserver-7d4fc7d867-smr5d\" (UID: \"d702c0ea-d2bd-41dc-9a3a-39caacbb288d\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-smr5d" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.626476 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6477423-4b0a-43d7-9514-bde25388af77-config\") pod \"kube-storage-version-migrator-operator-565b79b866-2pl95\" (UID: \"f6477423-4b0a-43d7-9514-bde25388af77\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-2pl95" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.626493 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ef29fedc-43ad-4cf5-b3ef-10a28c46842f-tmp\") pod \"cluster-image-registry-operator-86c45576b9-h9rh5\" (UID: \"ef29fedc-43ad-4cf5-b3ef-10a28c46842f\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-h9rh5" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.626511 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a8dd6004-2cc4-4971-9dcb-18d8871286b8-registration-dir\") pod \"csi-hostpathplugin-ttcl5\" (UID: \"a8dd6004-2cc4-4971-9dcb-18d8871286b8\") " 
pod="hostpath-provisioner/csi-hostpathplugin-ttcl5" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.626534 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f41303d0-06e3-4554-8fa9-d9dd935d0bec-serving-cert\") pod \"service-ca-operator-5b9c976747-9hn8c\" (UID: \"f41303d0-06e3-4554-8fa9-d9dd935d0bec\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-9hn8c" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.626552 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-r9cmd\" (UniqueName: \"kubernetes.io/projected/8c6ba355-2c21-431c-8767-821fb9075e1c-kube-api-access-r9cmd\") pod \"olm-operator-5cdf44d969-95nt8\" (UID: \"8c6ba355-2c21-431c-8767-821fb9075e1c\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-95nt8" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.626573 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/082f3bd2-f112-4f2e-b955-0826aac6df97-config-volume\") pod \"collect-profiles-29481660-hh6m6\" (UID: \"082f3bd2-f112-4f2e-b955-0826aac6df97\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481660-hh6m6" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.626601 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/31a102f9-d392-481f-85f7-4be9117cd31d-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-lcng5\" (UID: \"31a102f9-d392-481f-85f7-4be9117cd31d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-lcng5" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.626622 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/f41303d0-06e3-4554-8fa9-d9dd935d0bec-config\") pod \"service-ca-operator-5b9c976747-9hn8c\" (UID: \"f41303d0-06e3-4554-8fa9-d9dd935d0bec\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-9hn8c" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.626638 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/80f8b6d4-7eb4-42ec-9976-60dc6db3148f-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-mfd49\" (UID: \"80f8b6d4-7eb4-42ec-9976-60dc6db3148f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-mfd49" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.626655 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-v7kxc\" (UniqueName: \"kubernetes.io/projected/b967aa59-3ad8-4a80-a870-970c4166dd31-kube-api-access-v7kxc\") pod \"package-server-manager-77f986bd66-gc77j\" (UID: \"b967aa59-3ad8-4a80-a870-970c4166dd31\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-gc77j" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.626675 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/ef29fedc-43ad-4cf5-b3ef-10a28c46842f-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-h9rh5\" (UID: \"ef29fedc-43ad-4cf5-b3ef-10a28c46842f\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-h9rh5" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.626692 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-z5kpq\" (UniqueName: \"kubernetes.io/projected/118decd3-a665-4997-bd40-0f68d2295238-kube-api-access-z5kpq\") pod \"machine-config-operator-67c9d58cbb-m6g4t\" (UID: \"118decd3-a665-4997-bd40-0f68d2295238\") " 
pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-m6g4t" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.626712 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/0d738dd6-3c15-4131-837d-591792cb41cd-default-certificate\") pod \"router-default-68cf44c8b8-n9hxc\" (UID: \"0d738dd6-3c15-4131-837d-591792cb41cd\") " pod="openshift-ingress/router-default-68cf44c8b8-n9hxc" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.626727 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ac548cbe-da92-4dd6-bd33-705689710018-metrics-tls\") pod \"dns-operator-799b87ffcd-8622t\" (UID: \"ac548cbe-da92-4dd6-bd33-705689710018\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-8622t" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.626751 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ecb1b469-4758-499e-a0ba-8204058552be-kube-api-access\") pod \"kube-apiserver-operator-575994946d-m6krp\" (UID: \"ecb1b469-4758-499e-a0ba-8204058552be\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-m6krp" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.626769 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4nzmj\" (UniqueName: \"kubernetes.io/projected/31a102f9-d392-481f-85f7-4be9117cd31d-kube-api-access-4nzmj\") pod \"cluster-samples-operator-6b564684c8-lcng5\" (UID: \"31a102f9-d392-481f-85f7-4be9117cd31d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-lcng5" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.626785 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/664dc1e9-b220-4dd9-8576-b5798850bc57-client-ca\") pod \"controller-manager-65b6cccf98-lg8fb\" (UID: \"664dc1e9-b220-4dd9-8576-b5798850bc57\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-lg8fb" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.626800 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/273a5bb6-cb84-41ee-a44a-ee5bc13291f5-certs\") pod \"machine-config-server-mg52n\" (UID: \"273a5bb6-cb84-41ee-a44a-ee5bc13291f5\") " pod="openshift-machine-config-operator/machine-config-server-mg52n" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.626818 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/21e183fd-a881-4f61-a726-bcaaf60e71d5-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-rjtnv\" (UID: \"21e183fd-a881-4f61-a726-bcaaf60e71d5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-rjtnv" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.626835 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bpl59\" (UniqueName: \"kubernetes.io/projected/80f8b6d4-7eb4-42ec-9976-60dc6db3148f-kube-api-access-bpl59\") pod \"catalog-operator-75ff9f647d-mfd49\" (UID: \"80f8b6d4-7eb4-42ec-9976-60dc6db3148f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-mfd49" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.626851 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/3b4463ed-eba2-4ba4-afb8-2424e957fc37-signing-cabundle\") pod \"service-ca-74545575db-l96rs\" (UID: \"3b4463ed-eba2-4ba4-afb8-2424e957fc37\") " pod="openshift-service-ca/service-ca-74545575db-l96rs" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.626879 5115 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hb7kn\" (UniqueName: \"kubernetes.io/projected/f6477423-4b0a-43d7-9514-bde25388af77-kube-api-access-hb7kn\") pod \"kube-storage-version-migrator-operator-565b79b866-2pl95\" (UID: \"f6477423-4b0a-43d7-9514-bde25388af77\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-2pl95" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.632118 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d702c0ea-d2bd-41dc-9a3a-39caacbb288d-webhook-cert\") pod \"packageserver-7d4fc7d867-smr5d\" (UID: \"d702c0ea-d2bd-41dc-9a3a-39caacbb288d\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-smr5d" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.637317 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-p86sf\" (UniqueName: \"kubernetes.io/projected/01855721-bd0b-4ddc-91d0-be658345b9c5-kube-api-access-p86sf\") pod \"migrator-866fcbc849-xtwqk\" (UID: \"01855721-bd0b-4ddc-91d0-be658345b9c5\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-xtwqk" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.637383 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21e183fd-a881-4f61-a726-bcaaf60e71d5-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-rjtnv\" (UID: \"21e183fd-a881-4f61-a726-bcaaf60e71d5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-rjtnv" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.637410 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ecb1b469-4758-499e-a0ba-8204058552be-serving-cert\") pod 
\"kube-apiserver-operator-575994946d-m6krp\" (UID: \"ecb1b469-4758-499e-a0ba-8204058552be\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-m6krp" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.637432 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8c6ba355-2c21-431c-8767-821fb9075e1c-profile-collector-cert\") pod \"olm-operator-5cdf44d969-95nt8\" (UID: \"8c6ba355-2c21-431c-8767-821fb9075e1c\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-95nt8" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.637475 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6477423-4b0a-43d7-9514-bde25388af77-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-2pl95\" (UID: \"f6477423-4b0a-43d7-9514-bde25388af77\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-2pl95" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.637499 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/a8dd6004-2cc4-4971-9dcb-18d8871286b8-mountpoint-dir\") pod \"csi-hostpathplugin-ttcl5\" (UID: \"a8dd6004-2cc4-4971-9dcb-18d8871286b8\") " pod="hostpath-provisioner/csi-hostpathplugin-ttcl5" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.637535 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/082f3bd2-f112-4f2e-b955-0826aac6df97-secret-volume\") pod \"collect-profiles-29481660-hh6m6\" (UID: \"082f3bd2-f112-4f2e-b955-0826aac6df97\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481660-hh6m6" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.637555 5115 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"kube-api-access-2nz27\" (UniqueName: \"kubernetes.io/projected/273a5bb6-cb84-41ee-a44a-ee5bc13291f5-kube-api-access-2nz27\") pod \"machine-config-server-mg52n\" (UID: \"273a5bb6-cb84-41ee-a44a-ee5bc13291f5\") " pod="openshift-machine-config-operator/machine-config-server-mg52n" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.637564 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/118decd3-a665-4997-bd40-0f68d2295238-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-m6g4t\" (UID: \"118decd3-a665-4997-bd40-0f68d2295238\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-m6g4t" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.637594 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-k7857\" (UniqueName: \"kubernetes.io/projected/ac548cbe-da92-4dd6-bd33-705689710018-kube-api-access-k7857\") pod \"dns-operator-799b87ffcd-8622t\" (UID: \"ac548cbe-da92-4dd6-bd33-705689710018\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-8622t" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.637640 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8b8c4\" (UniqueName: \"kubernetes.io/projected/f41303d0-06e3-4554-8fa9-d9dd935d0bec-kube-api-access-8b8c4\") pod \"service-ca-operator-5b9c976747-9hn8c\" (UID: \"f41303d0-06e3-4554-8fa9-d9dd935d0bec\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-9hn8c" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.637712 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nrkgx\" (UniqueName: \"kubernetes.io/projected/d702c0ea-d2bd-41dc-9a3a-39caacbb288d-kube-api-access-nrkgx\") pod \"packageserver-7d4fc7d867-smr5d\" (UID: \"d702c0ea-d2bd-41dc-9a3a-39caacbb288d\") " 
pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-smr5d" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.637737 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b39cc292-22ad-4fb0-9d3f-6467c81680eb-tmp\") pod \"route-controller-manager-776cdc94d6-jxpqr\" (UID: \"b39cc292-22ad-4fb0-9d3f-6467c81680eb\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jxpqr" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.637802 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-697ts\" (UniqueName: \"kubernetes.io/projected/664dc1e9-b220-4dd9-8576-b5798850bc57-kube-api-access-697ts\") pod \"controller-manager-65b6cccf98-lg8fb\" (UID: \"664dc1e9-b220-4dd9-8576-b5798850bc57\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-lg8fb" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.637840 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/ecb1b469-4758-499e-a0ba-8204058552be-tmp-dir\") pod \"kube-apiserver-operator-575994946d-m6krp\" (UID: \"ecb1b469-4758-499e-a0ba-8204058552be\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-m6krp" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.637869 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/4d93cff2-21b0-4fcb-b899-b6efe5a56822-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-pkz7s\" (UID: \"4d93cff2-21b0-4fcb-b899-b6efe5a56822\") " pod="openshift-multus/cni-sysctl-allowlist-ds-pkz7s" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.637908 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-x65zw\" (UniqueName: 
\"kubernetes.io/projected/4d93cff2-21b0-4fcb-b899-b6efe5a56822-kube-api-access-x65zw\") pod \"cni-sysctl-allowlist-ds-pkz7s\" (UID: \"4d93cff2-21b0-4fcb-b899-b6efe5a56822\") " pod="openshift-multus/cni-sysctl-allowlist-ds-pkz7s" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.637932 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6008f0e6-56c0-4fdd-89b8-0649fb365b0f-webhook-certs\") pod \"multus-admission-controller-69db94689b-ztcgs\" (UID: \"6008f0e6-56c0-4fdd-89b8-0649fb365b0f\") " pod="openshift-multus/multus-admission-controller-69db94689b-ztcgs" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.637976 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/8c6ba355-2c21-431c-8767-821fb9075e1c-tmpfs\") pod \"olm-operator-5cdf44d969-95nt8\" (UID: \"8c6ba355-2c21-431c-8767-821fb9075e1c\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-95nt8" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.637993 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a8dd6004-2cc4-4971-9dcb-18d8871286b8-socket-dir\") pod \"csi-hostpathplugin-ttcl5\" (UID: \"a8dd6004-2cc4-4971-9dcb-18d8871286b8\") " pod="hostpath-provisioner/csi-hostpathplugin-ttcl5" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.638020 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d60eae6f-6fe4-41cd-8c8f-54749aacc87e-config-volume\") pod \"dns-default-59xcc\" (UID: \"d60eae6f-6fe4-41cd-8c8f-54749aacc87e\") " pod="openshift-dns/dns-default-59xcc" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.638038 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/21e183fd-a881-4f61-a726-bcaaf60e71d5-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-rjtnv\" (UID: \"21e183fd-a881-4f61-a726-bcaaf60e71d5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-rjtnv" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.638054 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/ac548cbe-da92-4dd6-bd33-705689710018-tmp-dir\") pod \"dns-operator-799b87ffcd-8622t\" (UID: \"ac548cbe-da92-4dd6-bd33-705689710018\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-8622t" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.638071 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/118decd3-a665-4997-bd40-0f68d2295238-images\") pod \"machine-config-operator-67c9d58cbb-m6g4t\" (UID: \"118decd3-a665-4997-bd40-0f68d2295238\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-m6g4t" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.638087 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/a8dd6004-2cc4-4971-9dcb-18d8871286b8-plugins-dir\") pod \"csi-hostpathplugin-ttcl5\" (UID: \"a8dd6004-2cc4-4971-9dcb-18d8871286b8\") " pod="hostpath-provisioner/csi-hostpathplugin-ttcl5" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.638105 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wmrql\" (UniqueName: \"kubernetes.io/projected/d60eae6f-6fe4-41cd-8c8f-54749aacc87e-kube-api-access-wmrql\") pod \"dns-default-59xcc\" (UID: \"d60eae6f-6fe4-41cd-8c8f-54749aacc87e\") " pod="openshift-dns/dns-default-59xcc" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.638121 5115 reconciler_common.go:224] "operationExecutor.MountVolume 
started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8c6ba355-2c21-431c-8767-821fb9075e1c-srv-cert\") pod \"olm-operator-5cdf44d969-95nt8\" (UID: \"8c6ba355-2c21-431c-8767-821fb9075e1c\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-95nt8" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.638137 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b39cc292-22ad-4fb0-9d3f-6467c81680eb-serving-cert\") pod \"route-controller-manager-776cdc94d6-jxpqr\" (UID: \"b39cc292-22ad-4fb0-9d3f-6467c81680eb\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jxpqr" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.638176 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/0d738dd6-3c15-4131-837d-591792cb41cd-stats-auth\") pod \"router-default-68cf44c8b8-n9hxc\" (UID: \"0d738dd6-3c15-4131-837d-591792cb41cd\") " pod="openshift-ingress/router-default-68cf44c8b8-n9hxc" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.638212 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/664dc1e9-b220-4dd9-8576-b5798850bc57-config\") pod \"controller-manager-65b6cccf98-lg8fb\" (UID: \"664dc1e9-b220-4dd9-8576-b5798850bc57\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-lg8fb" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.638237 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/ef29fedc-43ad-4cf5-b3ef-10a28c46842f-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-h9rh5\" (UID: \"ef29fedc-43ad-4cf5-b3ef-10a28c46842f\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-h9rh5" Jan 20 09:10:09 crc 
kubenswrapper[5115]: I0120 09:10:09.638259 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-h4cwq\" (UniqueName: \"kubernetes.io/projected/3b4463ed-eba2-4ba4-afb8-2424e957fc37-kube-api-access-h4cwq\") pod \"service-ca-74545575db-l96rs\" (UID: \"3b4463ed-eba2-4ba4-afb8-2424e957fc37\") " pod="openshift-service-ca/service-ca-74545575db-l96rs" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.638286 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/d60eae6f-6fe4-41cd-8c8f-54749aacc87e-tmp-dir\") pod \"dns-default-59xcc\" (UID: \"d60eae6f-6fe4-41cd-8c8f-54749aacc87e\") " pod="openshift-dns/dns-default-59xcc" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.638301 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/118decd3-a665-4997-bd40-0f68d2295238-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-m6g4t\" (UID: \"118decd3-a665-4997-bd40-0f68d2295238\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-m6g4t" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.638326 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/4d93cff2-21b0-4fcb-b899-b6efe5a56822-ready\") pod \"cni-sysctl-allowlist-ds-pkz7s\" (UID: \"4d93cff2-21b0-4fcb-b899-b6efe5a56822\") " pod="openshift-multus/cni-sysctl-allowlist-ds-pkz7s" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.638352 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/664dc1e9-b220-4dd9-8576-b5798850bc57-serving-cert\") pod \"controller-manager-65b6cccf98-lg8fb\" (UID: \"664dc1e9-b220-4dd9-8576-b5798850bc57\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-lg8fb" Jan 20 
09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.638374 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-f4cj7\" (UniqueName: \"kubernetes.io/projected/f7ec9898-6747-40af-be60-ce1289d0a4e6-kube-api-access-f4cj7\") pod \"control-plane-machine-set-operator-75ffdb6fcd-69gcn\" (UID: \"f7ec9898-6747-40af-be60-ce1289d0a4e6\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-69gcn" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.638406 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/664dc1e9-b220-4dd9-8576-b5798850bc57-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-lg8fb\" (UID: \"664dc1e9-b220-4dd9-8576-b5798850bc57\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-lg8fb" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.638426 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3984fc5a-413e-46e1-94ab-3c230891fe87-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-9gfdh\" (UID: \"3984fc5a-413e-46e1-94ab-3c230891fe87\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-9gfdh" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.638442 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/b967aa59-3ad8-4a80-a870-970c4166dd31-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-gc77j\" (UID: \"b967aa59-3ad8-4a80-a870-970c4166dd31\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-gc77j" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.638474 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: 
\"kubernetes.io/host-path/a8dd6004-2cc4-4971-9dcb-18d8871286b8-csi-data-dir\") pod \"csi-hostpathplugin-ttcl5\" (UID: \"a8dd6004-2cc4-4971-9dcb-18d8871286b8\") " pod="hostpath-provisioner/csi-hostpathplugin-ttcl5" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.638500 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xqt2z\" (UniqueName: \"kubernetes.io/projected/082f3bd2-f112-4f2e-b955-0826aac6df97-kube-api-access-xqt2z\") pod \"collect-profiles-29481660-hh6m6\" (UID: \"082f3bd2-f112-4f2e-b955-0826aac6df97\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481660-hh6m6" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.638517 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3984fc5a-413e-46e1-94ab-3c230891fe87-tmp\") pod \"marketplace-operator-547dbd544d-9gfdh\" (UID: \"3984fc5a-413e-46e1-94ab-3c230891fe87\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-9gfdh" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.638537 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4d93cff2-21b0-4fcb-b899-b6efe5a56822-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-pkz7s\" (UID: \"4d93cff2-21b0-4fcb-b899-b6efe5a56822\") " pod="openshift-multus/cni-sysctl-allowlist-ds-pkz7s" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.638554 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/664dc1e9-b220-4dd9-8576-b5798850bc57-tmp\") pod \"controller-manager-65b6cccf98-lg8fb\" (UID: \"664dc1e9-b220-4dd9-8576-b5798850bc57\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-lg8fb" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.638573 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d738dd6-3c15-4131-837d-591792cb41cd-service-ca-bundle\") pod \"router-default-68cf44c8b8-n9hxc\" (UID: \"0d738dd6-3c15-4131-837d-591792cb41cd\") " pod="openshift-ingress/router-default-68cf44c8b8-n9hxc" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.638605 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3984fc5a-413e-46e1-94ab-3c230891fe87-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-9gfdh\" (UID: \"3984fc5a-413e-46e1-94ab-3c230891fe87\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-9gfdh" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.638623 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fbc48af4-261d-4599-a7fd-edd26b2b4022-cert\") pod \"ingress-canary-ft42n\" (UID: \"fbc48af4-261d-4599-a7fd-edd26b2b4022\") " pod="openshift-ingress-canary/ingress-canary-ft42n" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.638640 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0d738dd6-3c15-4131-837d-591792cb41cd-metrics-certs\") pod \"router-default-68cf44c8b8-n9hxc\" (UID: \"0d738dd6-3c15-4131-837d-591792cb41cd\") " pod="openshift-ingress/router-default-68cf44c8b8-n9hxc" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.638662 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21e183fd-a881-4f61-a726-bcaaf60e71d5-config\") pod \"kube-controller-manager-operator-69d5f845f8-rjtnv\" (UID: \"21e183fd-a881-4f61-a726-bcaaf60e71d5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-rjtnv" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.638679 5115 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-g6l82\" (UniqueName: \"kubernetes.io/projected/6008f0e6-56c0-4fdd-89b8-0649fb365b0f-kube-api-access-g6l82\") pod \"multus-admission-controller-69db94689b-ztcgs\" (UID: \"6008f0e6-56c0-4fdd-89b8-0649fb365b0f\") " pod="openshift-multus/multus-admission-controller-69db94689b-ztcgs" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.639202 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b39cc292-22ad-4fb0-9d3f-6467c81680eb-tmp\") pod \"route-controller-manager-776cdc94d6-jxpqr\" (UID: \"b39cc292-22ad-4fb0-9d3f-6467c81680eb\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jxpqr" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.639505 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/ecb1b469-4758-499e-a0ba-8204058552be-tmp-dir\") pod \"kube-apiserver-operator-575994946d-m6krp\" (UID: \"ecb1b469-4758-499e-a0ba-8204058552be\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-m6krp" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.640115 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/d60eae6f-6fe4-41cd-8c8f-54749aacc87e-tmp-dir\") pod \"dns-default-59xcc\" (UID: \"d60eae6f-6fe4-41cd-8c8f-54749aacc87e\") " pod="openshift-dns/dns-default-59xcc" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.640124 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/4d93cff2-21b0-4fcb-b899-b6efe5a56822-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-pkz7s\" (UID: \"4d93cff2-21b0-4fcb-b899-b6efe5a56822\") " pod="openshift-multus/cni-sysctl-allowlist-ds-pkz7s" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 
09:10:09.640622 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/118decd3-a665-4997-bd40-0f68d2295238-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-m6g4t\" (UID: \"118decd3-a665-4997-bd40-0f68d2295238\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-m6g4t" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.640852 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/4d93cff2-21b0-4fcb-b899-b6efe5a56822-ready\") pod \"cni-sysctl-allowlist-ds-pkz7s\" (UID: \"4d93cff2-21b0-4fcb-b899-b6efe5a56822\") " pod="openshift-multus/cni-sysctl-allowlist-ds-pkz7s" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.642295 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/80f8b6d4-7eb4-42ec-9976-60dc6db3148f-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-mfd49\" (UID: \"80f8b6d4-7eb4-42ec-9976-60dc6db3148f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-mfd49" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.642364 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/a8dd6004-2cc4-4971-9dcb-18d8871286b8-mountpoint-dir\") pod \"csi-hostpathplugin-ttcl5\" (UID: \"a8dd6004-2cc4-4971-9dcb-18d8871286b8\") " pod="hostpath-provisioner/csi-hostpathplugin-ttcl5" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.642565 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/a8dd6004-2cc4-4971-9dcb-18d8871286b8-plugins-dir\") pod \"csi-hostpathplugin-ttcl5\" (UID: \"a8dd6004-2cc4-4971-9dcb-18d8871286b8\") " pod="hostpath-provisioner/csi-hostpathplugin-ttcl5" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 
09:10:09.643301 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/8c6ba355-2c21-431c-8767-821fb9075e1c-tmpfs\") pod \"olm-operator-5cdf44d969-95nt8\" (UID: \"8c6ba355-2c21-431c-8767-821fb9075e1c\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-95nt8" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.643393 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a8dd6004-2cc4-4971-9dcb-18d8871286b8-socket-dir\") pod \"csi-hostpathplugin-ttcl5\" (UID: \"a8dd6004-2cc4-4971-9dcb-18d8871286b8\") " pod="hostpath-provisioner/csi-hostpathplugin-ttcl5" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.645086 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/ac548cbe-da92-4dd6-bd33-705689710018-tmp-dir\") pod \"dns-operator-799b87ffcd-8622t\" (UID: \"ac548cbe-da92-4dd6-bd33-705689710018\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-8622t" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.645678 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d60eae6f-6fe4-41cd-8c8f-54749aacc87e-config-volume\") pod \"dns-default-59xcc\" (UID: \"d60eae6f-6fe4-41cd-8c8f-54749aacc87e\") " pod="openshift-dns/dns-default-59xcc" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.645838 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3984fc5a-413e-46e1-94ab-3c230891fe87-tmp\") pod \"marketplace-operator-547dbd544d-9gfdh\" (UID: \"3984fc5a-413e-46e1-94ab-3c230891fe87\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-9gfdh" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.645870 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6477423-4b0a-43d7-9514-bde25388af77-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-2pl95\" (UID: \"f6477423-4b0a-43d7-9514-bde25388af77\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-2pl95" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.646305 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/118decd3-a665-4997-bd40-0f68d2295238-images\") pod \"machine-config-operator-67c9d58cbb-m6g4t\" (UID: \"118decd3-a665-4997-bd40-0f68d2295238\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-m6g4t" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.647050 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/664dc1e9-b220-4dd9-8576-b5798850bc57-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-lg8fb\" (UID: \"664dc1e9-b220-4dd9-8576-b5798850bc57\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-lg8fb" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.647277 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21e183fd-a881-4f61-a726-bcaaf60e71d5-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-rjtnv\" (UID: \"21e183fd-a881-4f61-a726-bcaaf60e71d5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-rjtnv" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.648031 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4d93cff2-21b0-4fcb-b899-b6efe5a56822-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-pkz7s\" (UID: \"4d93cff2-21b0-4fcb-b899-b6efe5a56822\") " 
pod="openshift-multus/cni-sysctl-allowlist-ds-pkz7s" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.648373 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/664dc1e9-b220-4dd9-8576-b5798850bc57-tmp\") pod \"controller-manager-65b6cccf98-lg8fb\" (UID: \"664dc1e9-b220-4dd9-8576-b5798850bc57\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-lg8fb" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.648747 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/664dc1e9-b220-4dd9-8576-b5798850bc57-config\") pod \"controller-manager-65b6cccf98-lg8fb\" (UID: \"664dc1e9-b220-4dd9-8576-b5798850bc57\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-lg8fb" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.649099 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/ef29fedc-43ad-4cf5-b3ef-10a28c46842f-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-h9rh5\" (UID: \"ef29fedc-43ad-4cf5-b3ef-10a28c46842f\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-h9rh5" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.650032 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d738dd6-3c15-4131-837d-591792cb41cd-service-ca-bundle\") pod \"router-default-68cf44c8b8-n9hxc\" (UID: \"0d738dd6-3c15-4131-837d-591792cb41cd\") " pod="openshift-ingress/router-default-68cf44c8b8-n9hxc" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.651138 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/a8dd6004-2cc4-4971-9dcb-18d8871286b8-csi-data-dir\") pod \"csi-hostpathplugin-ttcl5\" (UID: 
\"a8dd6004-2cc4-4971-9dcb-18d8871286b8\") " pod="hostpath-provisioner/csi-hostpathplugin-ttcl5" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.652253 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ecb1b469-4758-499e-a0ba-8204058552be-serving-cert\") pod \"kube-apiserver-operator-575994946d-m6krp\" (UID: \"ecb1b469-4758-499e-a0ba-8204058552be\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-m6krp" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.653375 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8c6ba355-2c21-431c-8767-821fb9075e1c-profile-collector-cert\") pod \"olm-operator-5cdf44d969-95nt8\" (UID: \"8c6ba355-2c21-431c-8767-821fb9075e1c\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-95nt8" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.653744 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6008f0e6-56c0-4fdd-89b8-0649fb365b0f-webhook-certs\") pod \"multus-admission-controller-69db94689b-ztcgs\" (UID: \"6008f0e6-56c0-4fdd-89b8-0649fb365b0f\") " pod="openshift-multus/multus-admission-controller-69db94689b-ztcgs" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.654524 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b39cc292-22ad-4fb0-9d3f-6467c81680eb-serving-cert\") pod \"route-controller-manager-776cdc94d6-jxpqr\" (UID: \"b39cc292-22ad-4fb0-9d3f-6467c81680eb\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jxpqr" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.655222 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: 
\"kubernetes.io/empty-dir/d702c0ea-d2bd-41dc-9a3a-39caacbb288d-tmpfs\") pod \"packageserver-7d4fc7d867-smr5d\" (UID: \"d702c0ea-d2bd-41dc-9a3a-39caacbb288d\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-smr5d" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.655459 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ef29fedc-43ad-4cf5-b3ef-10a28c46842f-tmp\") pod \"cluster-image-registry-operator-86c45576b9-h9rh5\" (UID: \"ef29fedc-43ad-4cf5-b3ef-10a28c46842f\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-h9rh5" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.655525 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a8dd6004-2cc4-4971-9dcb-18d8871286b8-registration-dir\") pod \"csi-hostpathplugin-ttcl5\" (UID: \"a8dd6004-2cc4-4971-9dcb-18d8871286b8\") " pod="hostpath-provisioner/csi-hostpathplugin-ttcl5" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.655665 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/082f3bd2-f112-4f2e-b955-0826aac6df97-secret-volume\") pod \"collect-profiles-29481660-hh6m6\" (UID: \"082f3bd2-f112-4f2e-b955-0826aac6df97\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481660-hh6m6" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.657065 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3984fc5a-413e-46e1-94ab-3c230891fe87-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-9gfdh\" (UID: \"3984fc5a-413e-46e1-94ab-3c230891fe87\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-9gfdh" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.657843 5115 operation_generator.go:615] "MountVolume.SetUp succeeded 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21e183fd-a881-4f61-a726-bcaaf60e71d5-config\") pod \"kube-controller-manager-operator-69d5f845f8-rjtnv\" (UID: \"21e183fd-a881-4f61-a726-bcaaf60e71d5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-rjtnv" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.658177 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/664dc1e9-b220-4dd9-8576-b5798850bc57-serving-cert\") pod \"controller-manager-65b6cccf98-lg8fb\" (UID: \"664dc1e9-b220-4dd9-8576-b5798850bc57\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-lg8fb" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.658553 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/80f8b6d4-7eb4-42ec-9976-60dc6db3148f-tmpfs\") pod \"catalog-operator-75ff9f647d-mfd49\" (UID: \"80f8b6d4-7eb4-42ec-9976-60dc6db3148f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-mfd49" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.658998 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/664dc1e9-b220-4dd9-8576-b5798850bc57-client-ca\") pod \"controller-manager-65b6cccf98-lg8fb\" (UID: \"664dc1e9-b220-4dd9-8576-b5798850bc57\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-lg8fb" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.658996 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d702c0ea-d2bd-41dc-9a3a-39caacbb288d-apiservice-cert\") pod \"packageserver-7d4fc7d867-smr5d\" (UID: \"d702c0ea-d2bd-41dc-9a3a-39caacbb288d\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-smr5d" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.659471 
5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ecb1b469-4758-499e-a0ba-8204058552be-config\") pod \"kube-apiserver-operator-575994946d-m6krp\" (UID: \"ecb1b469-4758-499e-a0ba-8204058552be\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-m6krp" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.660170 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/082f3bd2-f112-4f2e-b955-0826aac6df97-config-volume\") pod \"collect-profiles-29481660-hh6m6\" (UID: \"082f3bd2-f112-4f2e-b955-0826aac6df97\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481660-hh6m6" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.661585 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b39cc292-22ad-4fb0-9d3f-6467c81680eb-config\") pod \"route-controller-manager-776cdc94d6-jxpqr\" (UID: \"b39cc292-22ad-4fb0-9d3f-6467c81680eb\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jxpqr" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.662503 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/3b4463ed-eba2-4ba4-afb8-2424e957fc37-signing-cabundle\") pod \"service-ca-74545575db-l96rs\" (UID: \"3b4463ed-eba2-4ba4-afb8-2424e957fc37\") " pod="openshift-service-ca/service-ca-74545575db-l96rs" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.662723 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/21e183fd-a881-4f61-a726-bcaaf60e71d5-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-rjtnv\" (UID: \"21e183fd-a881-4f61-a726-bcaaf60e71d5\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-rjtnv" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.673277 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/0d738dd6-3c15-4131-837d-591792cb41cd-stats-auth\") pod \"router-default-68cf44c8b8-n9hxc\" (UID: \"0d738dd6-3c15-4131-837d-591792cb41cd\") " pod="openshift-ingress/router-default-68cf44c8b8-n9hxc" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.673664 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ef29fedc-43ad-4cf5-b3ef-10a28c46842f-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-h9rh5\" (UID: \"ef29fedc-43ad-4cf5-b3ef-10a28c46842f\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-h9rh5" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.673721 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fbc48af4-261d-4599-a7fd-edd26b2b4022-cert\") pod \"ingress-canary-ft42n\" (UID: \"fbc48af4-261d-4599-a7fd-edd26b2b4022\") " pod="openshift-ingress-canary/ingress-canary-ft42n" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.674146 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f41303d0-06e3-4554-8fa9-d9dd935d0bec-config\") pod \"service-ca-operator-5b9c976747-9hn8c\" (UID: \"f41303d0-06e3-4554-8fa9-d9dd935d0bec\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-9hn8c" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.674272 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3984fc5a-413e-46e1-94ab-3c230891fe87-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-9gfdh\" (UID: 
\"3984fc5a-413e-46e1-94ab-3c230891fe87\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-9gfdh" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.674403 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f41303d0-06e3-4554-8fa9-d9dd935d0bec-serving-cert\") pod \"service-ca-operator-5b9c976747-9hn8c\" (UID: \"f41303d0-06e3-4554-8fa9-d9dd935d0bec\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-9hn8c" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.674750 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8c6ba355-2c21-431c-8767-821fb9075e1c-srv-cert\") pod \"olm-operator-5cdf44d969-95nt8\" (UID: \"8c6ba355-2c21-431c-8767-821fb9075e1c\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-95nt8" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.676215 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6477423-4b0a-43d7-9514-bde25388af77-config\") pod \"kube-storage-version-migrator-operator-565b79b866-2pl95\" (UID: \"f6477423-4b0a-43d7-9514-bde25388af77\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-2pl95" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.677401 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/3b4463ed-eba2-4ba4-afb8-2424e957fc37-signing-key\") pod \"service-ca-74545575db-l96rs\" (UID: \"3b4463ed-eba2-4ba4-afb8-2424e957fc37\") " pod="openshift-service-ca/service-ca-74545575db-l96rs" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.682034 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-bxhkt"] Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.682625 
5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/273a5bb6-cb84-41ee-a44a-ee5bc13291f5-certs\") pod \"machine-config-server-mg52n\" (UID: \"273a5bb6-cb84-41ee-a44a-ee5bc13291f5\") " pod="openshift-machine-config-operator/machine-config-server-mg52n" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.684062 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6hvv\" (UniqueName: \"kubernetes.io/projected/3984fc5a-413e-46e1-94ab-3c230891fe87-kube-api-access-l6hvv\") pod \"marketplace-operator-547dbd544d-9gfdh\" (UID: \"3984fc5a-413e-46e1-94ab-3c230891fe87\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-9gfdh" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.685487 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0d738dd6-3c15-4131-837d-591792cb41cd-metrics-certs\") pod \"router-default-68cf44c8b8-n9hxc\" (UID: \"0d738dd6-3c15-4131-837d-591792cb41cd\") " pod="openshift-ingress/router-default-68cf44c8b8-n9hxc" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.685536 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/b967aa59-3ad8-4a80-a870-970c4166dd31-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-gc77j\" (UID: \"b967aa59-3ad8-4a80-a870-970c4166dd31\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-gc77j" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.688506 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7857\" (UniqueName: \"kubernetes.io/projected/ac548cbe-da92-4dd6-bd33-705689710018-kube-api-access-k7857\") pod \"dns-operator-799b87ffcd-8622t\" (UID: \"ac548cbe-da92-4dd6-bd33-705689710018\") " 
pod="openshift-dns-operator/dns-operator-799b87ffcd-8622t" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.691418 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7ntwm"] Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.692687 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d60eae6f-6fe4-41cd-8c8f-54749aacc87e-metrics-tls\") pod \"dns-default-59xcc\" (UID: \"d60eae6f-6fe4-41cd-8c8f-54749aacc87e\") " pod="openshift-dns/dns-default-59xcc" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.692888 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/0d738dd6-3c15-4131-837d-591792cb41cd-default-certificate\") pod \"router-default-68cf44c8b8-n9hxc\" (UID: \"0d738dd6-3c15-4131-837d-591792cb41cd\") " pod="openshift-ingress/router-default-68cf44c8b8-n9hxc" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.694007 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/f7ec9898-6747-40af-be60-ce1289d0a4e6-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-69gcn\" (UID: \"f7ec9898-6747-40af-be60-ce1289d0a4e6\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-69gcn" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.695718 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/273a5bb6-cb84-41ee-a44a-ee5bc13291f5-node-bootstrap-token\") pod \"machine-config-server-mg52n\" (UID: \"273a5bb6-cb84-41ee-a44a-ee5bc13291f5\") " pod="openshift-machine-config-operator/machine-config-server-mg52n" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.696264 5115 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/31a102f9-d392-481f-85f7-4be9117cd31d-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-lcng5\" (UID: \"31a102f9-d392-481f-85f7-4be9117cd31d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-lcng5" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.699688 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/80f8b6d4-7eb4-42ec-9976-60dc6db3148f-srv-cert\") pod \"catalog-operator-75ff9f647d-mfd49\" (UID: \"80f8b6d4-7eb4-42ec-9976-60dc6db3148f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-mfd49" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.701367 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ac548cbe-da92-4dd6-bd33-705689710018-metrics-tls\") pod \"dns-operator-799b87ffcd-8622t\" (UID: \"ac548cbe-da92-4dd6-bd33-705689710018\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-8622t" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.701500 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/ef29fedc-43ad-4cf5-b3ef-10a28c46842f-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-h9rh5\" (UID: \"ef29fedc-43ad-4cf5-b3ef-10a28c46842f\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-h9rh5" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.710201 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6l82\" (UniqueName: \"kubernetes.io/projected/6008f0e6-56c0-4fdd-89b8-0649fb365b0f-kube-api-access-g6l82\") pod \"multus-admission-controller-69db94689b-ztcgs\" (UID: \"6008f0e6-56c0-4fdd-89b8-0649fb365b0f\") " 
pod="openshift-multus/multus-admission-controller-69db94689b-ztcgs" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.714578 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8b8c4\" (UniqueName: \"kubernetes.io/projected/f41303d0-06e3-4554-8fa9-d9dd935d0bec-kube-api-access-8b8c4\") pod \"service-ca-operator-5b9c976747-9hn8c\" (UID: \"f41303d0-06e3-4554-8fa9-d9dd935d0bec\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-9hn8c" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.728293 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrkgx\" (UniqueName: \"kubernetes.io/projected/d702c0ea-d2bd-41dc-9a3a-39caacbb288d-kube-api-access-nrkgx\") pod \"packageserver-7d4fc7d867-smr5d\" (UID: \"d702c0ea-d2bd-41dc-9a3a-39caacbb288d\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-smr5d" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.740817 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:09 crc kubenswrapper[5115]: E0120 09:10:09.742070 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:10.242053105 +0000 UTC m=+120.410831635 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.752785 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-697ts\" (UniqueName: \"kubernetes.io/projected/664dc1e9-b220-4dd9-8576-b5798850bc57-kube-api-access-697ts\") pod \"controller-manager-65b6cccf98-lg8fb\" (UID: \"664dc1e9-b220-4dd9-8576-b5798850bc57\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-lg8fb"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.787195 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-8622t"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.808723 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-p86sf\" (UniqueName: \"kubernetes.io/projected/01855721-bd0b-4ddc-91d0-be658345b9c5-kube-api-access-p86sf\") pod \"migrator-866fcbc849-xtwqk\" (UID: \"01855721-bd0b-4ddc-91d0-be658345b9c5\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-xtwqk"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.811252 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-x65zw\" (UniqueName: \"kubernetes.io/projected/4d93cff2-21b0-4fcb-b899-b6efe5a56822-kube-api-access-x65zw\") pod \"cni-sysctl-allowlist-ds-pkz7s\" (UID: \"4d93cff2-21b0-4fcb-b899-b6efe5a56822\") " pod="openshift-multus/cni-sysctl-allowlist-ds-pkz7s"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.824192 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-ztcgs"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.828581 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/21e183fd-a881-4f61-a726-bcaaf60e71d5-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-rjtnv\" (UID: \"21e183fd-a881-4f61-a726-bcaaf60e71d5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-rjtnv"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.844446 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 20 09:10:09 crc kubenswrapper[5115]: E0120 09:10:09.845124 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:10.345107747 +0000 UTC m=+120.513886277 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.847826 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4cj7\" (UniqueName: \"kubernetes.io/projected/f7ec9898-6747-40af-be60-ce1289d0a4e6-kube-api-access-f4cj7\") pod \"control-plane-machine-set-operator-75ffdb6fcd-69gcn\" (UID: \"f7ec9898-6747-40af-be60-ce1289d0a4e6\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-69gcn"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.861265 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-9gfdh"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.872043 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4cwq\" (UniqueName: \"kubernetes.io/projected/3b4463ed-eba2-4ba4-afb8-2424e957fc37-kube-api-access-h4cwq\") pod \"service-ca-74545575db-l96rs\" (UID: \"3b4463ed-eba2-4ba4-afb8-2424e957fc37\") " pod="openshift-service-ca/service-ca-74545575db-l96rs"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.872310 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-smr5d"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.887368 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-9hn8c"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.892467 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqt2z\" (UniqueName: \"kubernetes.io/projected/082f3bd2-f112-4f2e-b955-0826aac6df97-kube-api-access-xqt2z\") pod \"collect-profiles-29481660-hh6m6\" (UID: \"082f3bd2-f112-4f2e-b955-0826aac6df97\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481660-hh6m6"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.902009 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-l96rs"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.913090 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nz27\" (UniqueName: \"kubernetes.io/projected/273a5bb6-cb84-41ee-a44a-ee5bc13291f5-kube-api-access-2nz27\") pod \"machine-config-server-mg52n\" (UID: \"273a5bb6-cb84-41ee-a44a-ee5bc13291f5\") " pod="openshift-machine-config-operator/machine-config-server-mg52n"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.918841 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-mg52n"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.928659 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5kpq\" (UniqueName: \"kubernetes.io/projected/118decd3-a665-4997-bd40-0f68d2295238-kube-api-access-z5kpq\") pod \"machine-config-operator-67c9d58cbb-m6g4t\" (UID: \"118decd3-a665-4997-bd40-0f68d2295238\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-m6g4t"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.946742 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-xn6qp"]
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.946947 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j"
Jan 20 09:10:09 crc kubenswrapper[5115]: E0120 09:10:09.947569 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:10.447548392 +0000 UTC m=+120.616326912 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.955680 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssv77\" (UniqueName: \"kubernetes.io/projected/fbc48af4-261d-4599-a7fd-edd26b2b4022-kube-api-access-ssv77\") pod \"ingress-canary-ft42n\" (UID: \"fbc48af4-261d-4599-a7fd-edd26b2b4022\") " pod="openshift-ingress-canary/ingress-canary-ft42n"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.962607 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-pkz7s"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.962816 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-2vzsk"]
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.971931 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmrql\" (UniqueName: \"kubernetes.io/projected/d60eae6f-6fe4-41cd-8c8f-54749aacc87e-kube-api-access-wmrql\") pod \"dns-default-59xcc\" (UID: \"d60eae6f-6fe4-41cd-8c8f-54749aacc87e\") " pod="openshift-dns/dns-default-59xcc"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.989707 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cj2cf\" (UniqueName: \"kubernetes.io/projected/b39cc292-22ad-4fb0-9d3f-6467c81680eb-kube-api-access-cj2cf\") pod \"route-controller-manager-776cdc94d6-jxpqr\" (UID: \"b39cc292-22ad-4fb0-9d3f-6467c81680eb\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jxpqr"
Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.000767 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-69gcn"
Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.006388 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ef29fedc-43ad-4cf5-b3ef-10a28c46842f-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-h9rh5\" (UID: \"ef29fedc-43ad-4cf5-b3ef-10a28c46842f\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-h9rh5"
Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.010809 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-lg8fb"
Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.026624 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nzmj\" (UniqueName: \"kubernetes.io/projected/31a102f9-d392-481f-85f7-4be9117cd31d-kube-api-access-4nzmj\") pod \"cluster-samples-operator-6b564684c8-lcng5\" (UID: \"31a102f9-d392-481f-85f7-4be9117cd31d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-lcng5"
Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.033416 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-rjtnv"
Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.049954 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.050118 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.050164 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.050260 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.055886 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7kxc\" (UniqueName: \"kubernetes.io/projected/b967aa59-3ad8-4a80-a870-970c4166dd31-kube-api-access-v7kxc\") pod \"package-server-manager-77f986bd66-gc77j\" (UID: \"b967aa59-3ad8-4a80-a870-970c4166dd31\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-gc77j"
Jan 20 09:10:10 crc kubenswrapper[5115]: E0120 09:10:10.059287 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:10.559245505 +0000 UTC m=+120.728024035 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.059854 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.065652 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.078703 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpl59\" (UniqueName: \"kubernetes.io/projected/80f8b6d4-7eb4-42ec-9976-60dc6db3148f-kube-api-access-bpl59\") pod \"catalog-operator-75ff9f647d-mfd49\" (UID: \"80f8b6d4-7eb4-42ec-9976-60dc6db3148f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-mfd49"
Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.085048 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.086974 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9cmd\" (UniqueName: \"kubernetes.io/projected/8c6ba355-2c21-431c-8767-821fb9075e1c-kube-api-access-r9cmd\") pod \"olm-operator-5cdf44d969-95nt8\" (UID: \"8c6ba355-2c21-431c-8767-821fb9075e1c\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-95nt8"
Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.168593 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-95nt8"
Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.168996 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-mfd49"
Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.170528 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.170685 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j"
Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.170876 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3d8f5093-1a2e-4c32-8c74-b6cfb185cc99-metrics-certs\") pod \"network-metrics-daemon-tzrjx\" (UID: \"3d8f5093-1a2e-4c32-8c74-b6cfb185cc99\") " pod="openshift-multus/network-metrics-daemon-tzrjx"
Jan 20 09:10:10 crc kubenswrapper[5115]: E0120 09:10:10.171447 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:10.671432432 +0000 UTC m=+120.840210962 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.186711 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfgv4\" (UniqueName: \"kubernetes.io/projected/ef29fedc-43ad-4cf5-b3ef-10a28c46842f-kube-api-access-gfgv4\") pod \"cluster-image-registry-operator-86c45576b9-h9rh5\" (UID: \"ef29fedc-43ad-4cf5-b3ef-10a28c46842f\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-h9rh5"
Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.186946 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.197478 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-gc77j"
Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.213171 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\""
Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.213223 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jxpqr"
Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.215840 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.225140 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.229820 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\""
Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.238149 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-xtwqk"
Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.251612 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\""
Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.274971 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\""
Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.277565 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-m6g4t"
Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.287308 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.288448 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 20 09:10:10 crc kubenswrapper[5115]: E0120 09:10:10.289223 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:10.789195718 +0000 UTC m=+120.957974268 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.311457 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-pss2p"]
Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.331914 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\""
Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.332315 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\""
Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.333465 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-lcng5"
Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.335990 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-5494g"]
Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.337143 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-4x4rk"]
Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.338101 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\""
Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.340166 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481660-hh6m6"
Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.351697 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-c88bx"]
Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.352137 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\""
Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.359397 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-pkz7s" event={"ID":"4d93cff2-21b0-4fcb-b899-b6efe5a56822","Type":"ContainerStarted","Data":"857692043d4e2a0e52ae73c61d049790e037f8377cfd4c3084e2ea0725ae7c00"}
Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.364412 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ecb1b469-4758-499e-a0ba-8204058552be-kube-api-access\") pod \"kube-apiserver-operator-575994946d-m6krp\" (UID: \"ecb1b469-4758-499e-a0ba-8204058552be\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-m6krp"
Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.370323 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-xn6qp" event={"ID":"72f63421-cfe9-45f8-85fe-b779a81a7ebb","Type":"ContainerStarted","Data":"28881063122d7a14f5feacf8a2ef22fe6f63494735a9de7c64a1cb7fda57c7c1"}
Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.370605 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\""
Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.372714 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-2vzsk" event={"ID":"c6f108d0-ed4b-4318-bd96-7de2824bf73e","Type":"ContainerStarted","Data":"518c872fec22cdd51a60c393a62a1da97b3362200d0830aef601a474fdfaf4fa"}
Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.376535 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-bxhkt" event={"ID":"472e0bfa-47b1-4a6c-8fd5-3c5a0865c001","Type":"ContainerStarted","Data":"030b057e9627fccd8c29ccbdbe6505fc414132ec82d49743a05995e6e529362c"}
Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.387127 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-6lm7w" event={"ID":"45b3a05c-a4a6-4e67-9c8f-c914c93cb801","Type":"ContainerStarted","Data":"d77de929bd750a51c458f2d847183c40a060993b3059e0085b0e307e7f3cd220"}
Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.387249 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3d8f5093-1a2e-4c32-8c74-b6cfb185cc99-metrics-certs\") pod \"network-metrics-daemon-tzrjx\" (UID: \"3d8f5093-1a2e-4c32-8c74-b6cfb185cc99\") " pod="openshift-multus/network-metrics-daemon-tzrjx"
Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.389988 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j"
Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.390147 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\""
Jan 20 09:10:10 crc kubenswrapper[5115]: E0120 09:10:10.391348 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:10.891332075 +0000 UTC m=+121.060110605 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.392021 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-mg52n" event={"ID":"273a5bb6-cb84-41ee-a44a-ee5bc13291f5","Type":"ContainerStarted","Data":"96e6ecc379e774b84bc4108889c42fdb721fe098da26e1b5d8de869c31ec8352"}
Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.396462 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7ntwm" event={"ID":"10472dc9-9bed-4d08-811a-76a55f0d6cf4","Type":"ContainerStarted","Data":"3582dc0a54cf6707b7c404a3d8a5a811a81b42edeb4908a47674f0f62dcb4252"}
Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.430475 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\""
Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.433640 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-ft42n"
Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.450329 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\""
Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.463937 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hb7kn\" (UniqueName: \"kubernetes.io/projected/f6477423-4b0a-43d7-9514-bde25388af77-kube-api-access-hb7kn\") pod \"kube-storage-version-migrator-operator-565b79b866-2pl95\" (UID: \"f6477423-4b0a-43d7-9514-bde25388af77\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-2pl95"
Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.470887 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\""
Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.474752 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-59xcc"
Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.490931 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\""
Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.497152 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-h9rh5"
Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.499931 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 20 09:10:10 crc kubenswrapper[5115]: E0120 09:10:10.500208 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:11.000191412 +0000 UTC m=+121.168969932 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.512343 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\""
Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.532759 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\""
Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.540917 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-znfxc\" (UniqueName: \"kubernetes.io/projected/a8dd6004-2cc4-4971-9dcb-18d8871286b8-kube-api-access-znfxc\") pod \"csi-hostpathplugin-ttcl5\" (UID: \"a8dd6004-2cc4-4971-9dcb-18d8871286b8\") " pod="hostpath-provisioner/csi-hostpathplugin-ttcl5"
Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.543759 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmtmj\" (UniqueName: \"kubernetes.io/projected/0d738dd6-3c15-4131-837d-591792cb41cd-kube-api-access-kmtmj\") pod \"router-default-68cf44c8b8-n9hxc\" (UID: \"0d738dd6-3c15-4131-837d-591792cb41cd\") " pod="openshift-ingress/router-default-68cf44c8b8-n9hxc"
Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.571564 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\""
Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.576346 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tzrjx"
Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.591264 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\""
Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.599767 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-ttcl5"
Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.612634 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j"
Jan 20 09:10:10 crc kubenswrapper[5115]: E0120 09:10:10.613086 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:11.113073067 +0000 UTC m=+121.281851597 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.656953 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\""
Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.660102 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-m6krp"
Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.698031 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-777zr"]
Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.713650 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 20 09:10:10 crc kubenswrapper[5115]: E0120 09:10:10.713951 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:11.21392601 +0000 UTC m=+121.382704540 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.714140 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\"" Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.715632 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-78z8z"] Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.724930 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-n9hxc" Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.727543 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-glkw9"] Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.748689 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-s5mfg"] Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.750424 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\"" Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.758598 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-2pl95" Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.815379 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-s85qm"] Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.816105 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:10 crc kubenswrapper[5115]: E0120 09:10:10.816501 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:11.316487428 +0000 UTC m=+121.485265958 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.857076 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-5rdz6"] Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.864757 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-ljj2s"] Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.870036 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-8622t"] Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.899634 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-l96rs"] Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.910222 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-ztcgs"] Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.918002 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:10:10 crc kubenswrapper[5115]: E0120 09:10:10.918191 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 
podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:11.418158363 +0000 UTC m=+121.586936893 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.918619 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:10 crc kubenswrapper[5115]: E0120 09:10:10.919010 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:11.418996785 +0000 UTC m=+121.587775315 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.937869 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-smr5d"] Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.945420 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-69gcn"] Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.987641 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-9hn8c"] Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.995696 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-9gfdh"] Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.996487 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-rjtnv"] Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.021799 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:10:11 crc kubenswrapper[5115]: E0120 09:10:11.022259 5115 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:11.522229262 +0000 UTC m=+121.691007792 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.031417 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-lg8fb"] Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.105700 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-59xcc"] Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.126256 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:11 crc kubenswrapper[5115]: E0120 09:10:11.126651 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:11.62663638 +0000 UTC m=+121.795414910 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:11 crc kubenswrapper[5115]: W0120 09:10:11.133213 5115 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb9ac66ad_91ae_4ffd_b159_a7549ca71803.slice/crio-1cf63a2b40982fc4b23ed671e18ed561146cde92c64687e450d593f1dc96d6ee WatchSource:0}: Error finding container 1cf63a2b40982fc4b23ed671e18ed561146cde92c64687e450d593f1dc96d6ee: Status 404 returned error can't find the container with id 1cf63a2b40982fc4b23ed671e18ed561146cde92c64687e450d593f1dc96d6ee Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.143164 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-95nt8"] Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.151507 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-mfd49"] Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.162479 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-gc77j"] Jan 20 09:10:11 crc kubenswrapper[5115]: W0120 09:10:11.205226 5115 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf41303d0_06e3_4554_8fa9_d9dd935d0bec.slice/crio-8fb0540e002139b2c25967b29a5873b95c522f39d23c3fcd90793835887d5721 WatchSource:0}: Error finding container 8fb0540e002139b2c25967b29a5873b95c522f39d23c3fcd90793835887d5721: 
Status 404 returned error can't find the container with id 8fb0540e002139b2c25967b29a5873b95c522f39d23c3fcd90793835887d5721 Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.227716 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:10:11 crc kubenswrapper[5115]: E0120 09:10:11.228265 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:11.728233812 +0000 UTC m=+121.897012342 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.329625 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:11 crc kubenswrapper[5115]: E0120 09:10:11.329988 5115 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:11.829974349 +0000 UTC m=+121.998752879 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.415267 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-mfd49" event={"ID":"80f8b6d4-7eb4-42ec-9976-60dc6db3148f","Type":"ContainerStarted","Data":"6114d050ba7344d59c20b4fa5ae32d642e9f03de9e9fd3b6ffa138c4bb1446bc"} Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.420055 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-s5mfg" event={"ID":"3b28944b-12d3-4087-b906-99fbf2937724","Type":"ContainerStarted","Data":"9867aaf1ba54f7e1ce8f653f72cd6cf2e28d74cb1e668f9b7eeaed47fded789e"} Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.430855 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:10:11 crc kubenswrapper[5115]: E0120 09:10:11.431091 5115 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:11.931073078 +0000 UTC m=+122.099851608 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.448228 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-mg52n" event={"ID":"273a5bb6-cb84-41ee-a44a-ee5bc13291f5","Type":"ContainerStarted","Data":"4605b88333a42c6e823c3d40d543d9980763fa08927d988bd0e2e56767eedd6a"} Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.450081 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-777zr" event={"ID":"676675d9-dafb-4b30-ad88-bea33cf42ce0","Type":"ContainerStarted","Data":"d6d1ac4732cac18428ca5e1d1a0149baceff522aaa8a04805ddda01d65ae2590"} Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.456028 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-95nt8" event={"ID":"8c6ba355-2c21-431c-8767-821fb9075e1c","Type":"ContainerStarted","Data":"c816df60d98c33bc0e07d0d9de360f95708feb6803ec0bb65b3ab842fdaff3a3"} Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.462286 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7ntwm" 
event={"ID":"10472dc9-9bed-4d08-811a-76a55f0d6cf4","Type":"ContainerStarted","Data":"72f38a0ec4f70000765596eb43cfb1e0c64fd21da9d939639f480b7449581947"} Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.481813 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-jxpqr"] Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.500627 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"18f912f06c59235f5286c2791410fc92fae0eb44ec230d126606b127da4b7da1"} Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.512878 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-m6g4t"] Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.522921 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-ztcgs" event={"ID":"6008f0e6-56c0-4fdd-89b8-0649fb365b0f","Type":"ContainerStarted","Data":"7493bb218232c14833a0d0e5ff7d7bb0ca7ac7cf70738d52fdfad65e8f29b11b"} Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.532037 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:11 crc kubenswrapper[5115]: E0120 09:10:11.532756 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2026-01-20 09:10:12.032722093 +0000 UTC m=+122.201500623 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.549397 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-glkw9" event={"ID":"26f7f00b-d69c-4a82-934c-025eb1500a33","Type":"ContainerStarted","Data":"8169ee48989da8ea1ff65ce4251b7d218c2b534157c42ad297050c8c1d400ace"} Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.553362 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-lcng5"] Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.553992 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-pkz7s" event={"ID":"4d93cff2-21b0-4fcb-b899-b6efe5a56822","Type":"ContainerStarted","Data":"fbd3f92e049db05dae4cc895fdc510d06b5848377015dd755d42e4d740ef5442"} Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.555610 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-multus/cni-sysctl-allowlist-ds-pkz7s" Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.561693 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-bxhkt" event={"ID":"472e0bfa-47b1-4a6c-8fd5-3c5a0865c001","Type":"ContainerStarted","Data":"cb0da4370fa77a1149c8f2a607bf8df68c81e9f933d2b66a7582a5aa0c2c537e"} Jan 20 09:10:11 crc kubenswrapper[5115]: 
I0120 09:10:11.569049 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-59xcc" event={"ID":"d60eae6f-6fe4-41cd-8c8f-54749aacc87e","Type":"ContainerStarted","Data":"f3ae81c048828e0c39763c124b388b8386275dc126be13f23cc4ccd2cea78545"} Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.585287 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-69gcn" event={"ID":"f7ec9898-6747-40af-be60-ce1289d0a4e6","Type":"ContainerStarted","Data":"b9c1c69cae88c3eda2c866da436570e149ec0926e969e41af36f800b4b17e8d2"} Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.618744 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-8622t" event={"ID":"ac548cbe-da92-4dd6-bd33-705689710018","Type":"ContainerStarted","Data":"b189332696850039ab1e02dbf24c0846f856d8e8e03a2617ef610a91dc248488"} Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.642194 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-78z8z" event={"ID":"9aa837bd-63fc-4bb8-b158-d8632117a117","Type":"ContainerStarted","Data":"614ee2002a75d6767f8e7c9e2e61360d9d5634b79bcdff3e785ae86a4ca4784f"} Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.648916 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:10:11 crc kubenswrapper[5115]: E0120 09:10:11.650423 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-01-20 09:10:12.150404396 +0000 UTC m=+122.319182926 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.664306 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-6lm7w" event={"ID":"45b3a05c-a4a6-4e67-9c8f-c914c93cb801","Type":"ContainerStarted","Data":"6accd21ea1a6aea1f1180aaa76aba5788b55ce9fe6f0b7abce3037f0ddd5e615"} Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.724718 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-l96rs" event={"ID":"3b4463ed-eba2-4ba4-afb8-2424e957fc37","Type":"ContainerStarted","Data":"cd8c9fbae6d4c0be2c484010ceebdccef1db13489561034656c019cfeef3118d"} Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.735634 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29481660-hh6m6"] Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.739989 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-xtwqk"] Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.742625 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-m6krp"] Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.759505 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:11 crc kubenswrapper[5115]: E0120 09:10:11.760113 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:12.260088305 +0000 UTC m=+122.428866885 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.761725 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-5rdz6" event={"ID":"d2be688d-0ad1-4b7a-9c55-0f2b7500cfdf","Type":"ContainerStarted","Data":"75164ea9a8f551d0afa06a4acb1db1e5d2a11d5cf9890414d91b7fac237bc02f"} Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.765795 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-gc77j" event={"ID":"b967aa59-3ad8-4a80-a870-970c4166dd31","Type":"ContainerStarted","Data":"8d9d568901e811390357ab7a382f52584a24353ac4bdff85a472110157eb50ec"} Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.767955 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-65b6cccf98-lg8fb" event={"ID":"664dc1e9-b220-4dd9-8576-b5798850bc57","Type":"ContainerStarted","Data":"11a76b2995d1e7821d8b5caa00d0b12a5012c7b092dc0a7b36b27b7457c6f577"}
Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.775213 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-9hn8c" event={"ID":"f41303d0-06e3-4554-8fa9-d9dd935d0bec","Type":"ContainerStarted","Data":"8fb0540e002139b2c25967b29a5873b95c522f39d23c3fcd90793835887d5721"}
Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.776172 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-2pl95"]
Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.779584 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-9gfdh" event={"ID":"3984fc5a-413e-46e1-94ab-3c230891fe87","Type":"ContainerStarted","Data":"ba9e935cd9dbcccba3373b56114fb5112e6bd4ddbcf850c03f77ef25fb786214"}
Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.817162 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-s85qm" event={"ID":"dd3b472c-53e1-402a-ad30-244ea317f0e1","Type":"ContainerStarted","Data":"052fb4a983594ca74b3c2bc30d9134a6df6bc99ff8ec5a84f95c27e0f435b3c3"}
Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.837431 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-ljj2s" event={"ID":"b9ac66ad-91ae-4ffd-b159-a7549ca71803","Type":"ContainerStarted","Data":"1cf63a2b40982fc4b23ed671e18ed561146cde92c64687e450d593f1dc96d6ee"}
Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.839254 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-tzrjx"]
Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.856738 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-ttcl5"]
Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.859123 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-pss2p" event={"ID":"603cfb78-063c-444d-8434-38e8ff6b5f70","Type":"ContainerStarted","Data":"ea10f8ee9b6eace2f54e544b5c883889c4598fce326bb396b8ef1d49b04cbd33"}
Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.860689 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 20 09:10:11 crc kubenswrapper[5115]: E0120 09:10:11.860946 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:12.360919367 +0000 UTC m=+122.529697897 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.862478 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j"
Jan 20 09:10:11 crc kubenswrapper[5115]: E0120 09:10:11.863165 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:12.363150128 +0000 UTC m=+122.531928658 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.869745 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-4x4rk" event={"ID":"0386fc07-a367-4188-8fab-3ce5d14ad6f2","Type":"ContainerStarted","Data":"cbd08ab0a2c4c0818dcbd527faa0be5b5f4a1bad92f6532575218bb39ed5a760"}
Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.882586 5115 generic.go:358] "Generic (PLEG): container finished" podID="72f63421-cfe9-45f8-85fe-b779a81a7ebb" containerID="09e7fbda6c3e08fc45d4926c3ac4784e0e44c9fd8ef813f3b805e0113141078f" exitCode=0
Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.882665 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-xn6qp" event={"ID":"72f63421-cfe9-45f8-85fe-b779a81a7ebb","Type":"ContainerDied","Data":"09e7fbda6c3e08fc45d4926c3ac4784e0e44c9fd8ef813f3b805e0113141078f"}
Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.887617 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-2vzsk" event={"ID":"c6f108d0-ed4b-4318-bd96-7de2824bf73e","Type":"ContainerStarted","Data":"c895c4a8b8266caaaf889d03a2ee164cf3d7cff1e696bc8858d256b77c671370"}
Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.897302 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-rjtnv" event={"ID":"21e183fd-a881-4f61-a726-bcaaf60e71d5","Type":"ContainerStarted","Data":"a35b70628ae9545bde82275cb2462476256b0d2876d7d3b3a4fc47c22ba825ab"}
Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.905803 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-smr5d" event={"ID":"d702c0ea-d2bd-41dc-9a3a-39caacbb288d","Type":"ContainerStarted","Data":"92f7e8dc1afc55c246a3b6503fab8e7d7e7733acdb5d01763bcda6166ac74ec1"}
Jan 20 09:10:11 crc kubenswrapper[5115]: W0120 09:10:11.907632 5115 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podecb1b469_4758_499e_a0ba_8204058552be.slice/crio-51cdc552eef228e01fec754efe08ca4499bd477430d44d8476a0d6a72e8158c5 WatchSource:0}: Error finding container 51cdc552eef228e01fec754efe08ca4499bd477430d44d8476a0d6a72e8158c5: Status 404 returned error can't find the container with id 51cdc552eef228e01fec754efe08ca4499bd477430d44d8476a0d6a72e8158c5
Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.912131 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" event={"ID":"73f78db9-bab5-49ee-84a4-9f0825efca8a","Type":"ContainerStarted","Data":"41ea8c623ecacb84e93a0bb70429c6d21f2263332366f0ca16d5017167557e81"}
Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.917564 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-h9rh5"]
Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.933139 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-5494g" event={"ID":"09bd4dd7-1e94-43db-b3bc-fe2dd530d8ec","Type":"ContainerStarted","Data":"1f74e0c9554f8634c1b9f22b5a231966e157c8f60d4d46c7d458fa599c04679a"}
Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.963481 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 20 09:10:11 crc kubenswrapper[5115]: E0120 09:10:11.963731 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:12.463695902 +0000 UTC m=+122.632474432 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.967195 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j"
Jan 20 09:10:11 crc kubenswrapper[5115]: E0120 09:10:11.969834 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:12.469814786 +0000 UTC m=+122.638593306 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:12 crc kubenswrapper[5115]: W0120 09:10:11.997482 5115 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d8f5093_1a2e_4c32_8c74_b6cfb185cc99.slice/crio-884539935bb1f8878042308d9999e84e1a2eef356f095222edb348b6b1199abf WatchSource:0}: Error finding container 884539935bb1f8878042308d9999e84e1a2eef356f095222edb348b6b1199abf: Status 404 returned error can't find the container with id 884539935bb1f8878042308d9999e84e1a2eef356f095222edb348b6b1199abf
Jan 20 09:10:12 crc kubenswrapper[5115]: I0120 09:10:12.027922 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-ft42n"]
Jan 20 09:10:12 crc kubenswrapper[5115]: I0120 09:10:12.072446 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 20 09:10:12 crc kubenswrapper[5115]: E0120 09:10:12.072989 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:12.57295436 +0000 UTC m=+122.741732890 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:12 crc kubenswrapper[5115]: I0120 09:10:12.175993 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j"
Jan 20 09:10:12 crc kubenswrapper[5115]: E0120 09:10:12.176277 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:12.676265388 +0000 UTC m=+122.845043918 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:12 crc kubenswrapper[5115]: I0120 09:10:12.219059 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-mg52n" podStartSLOduration=7.219029774 podStartE2EDuration="7.219029774s" podCreationTimestamp="2026-01-20 09:10:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:10:12.170779232 +0000 UTC m=+122.339557762" watchObservedRunningTime="2026-01-20 09:10:12.219029774 +0000 UTC m=+122.387808304"
Jan 20 09:10:12 crc kubenswrapper[5115]: I0120 09:10:12.273065 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-pss2p" podStartSLOduration=101.273038472 podStartE2EDuration="1m41.273038472s" podCreationTimestamp="2026-01-20 09:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:10:12.206316294 +0000 UTC m=+122.375094824" watchObservedRunningTime="2026-01-20 09:10:12.273038472 +0000 UTC m=+122.441816992"
Jan 20 09:10:12 crc kubenswrapper[5115]: I0120 09:10:12.295887 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 20 09:10:12 crc kubenswrapper[5115]: E0120 09:10:12.296398 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:12.796372767 +0000 UTC m=+122.965151297 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:12 crc kubenswrapper[5115]: I0120 09:10:12.341156 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-69b85846b6-bxhkt" podStartSLOduration=101.341135916 podStartE2EDuration="1m41.341135916s" podCreationTimestamp="2026-01-20 09:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:10:12.27334496 +0000 UTC m=+122.442123490" watchObservedRunningTime="2026-01-20 09:10:12.341135916 +0000 UTC m=+122.509914446"
Jan 20 09:10:12 crc kubenswrapper[5115]: I0120 09:10:12.405102 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-5494g" podStartSLOduration=101.405085121 podStartE2EDuration="1m41.405085121s" podCreationTimestamp="2026-01-20 09:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:10:12.401842504 +0000 UTC m=+122.570621034" watchObservedRunningTime="2026-01-20 09:10:12.405085121 +0000 UTC m=+122.573863651"
Jan 20 09:10:12 crc kubenswrapper[5115]: I0120 09:10:12.408951 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j"
Jan 20 09:10:12 crc kubenswrapper[5115]: E0120 09:10:12.409290 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:12.909276373 +0000 UTC m=+123.078054903 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:12 crc kubenswrapper[5115]: I0120 09:10:12.432703 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-pkz7s" podStartSLOduration=6.43267945 podStartE2EDuration="6.43267945s" podCreationTimestamp="2026-01-20 09:10:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:10:12.432500145 +0000 UTC m=+122.601278675" watchObservedRunningTime="2026-01-20 09:10:12.43267945 +0000 UTC m=+122.601457980"
Jan 20 09:10:12 crc kubenswrapper[5115]: I0120 09:10:12.510347 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 20 09:10:12 crc kubenswrapper[5115]: E0120 09:10:12.510837 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:13.010808253 +0000 UTC m=+123.179586783 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:12 crc kubenswrapper[5115]: I0120 09:10:12.615087 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j"
Jan 20 09:10:12 crc kubenswrapper[5115]: E0120 09:10:12.615591 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:13.115571681 +0000 UTC m=+123.284350211 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:12 crc kubenswrapper[5115]: I0120 09:10:12.716887 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 20 09:10:12 crc kubenswrapper[5115]: E0120 09:10:12.717048 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:13.21701558 +0000 UTC m=+123.385794110 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:12 crc kubenswrapper[5115]: I0120 09:10:12.717503 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j"
Jan 20 09:10:12 crc kubenswrapper[5115]: E0120 09:10:12.718163 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:13.21815152 +0000 UTC m=+123.386930050 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:12 crc kubenswrapper[5115]: I0120 09:10:12.821440 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 20 09:10:12 crc kubenswrapper[5115]: E0120 09:10:12.821791 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:13.321771567 +0000 UTC m=+123.490550097 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:12 crc kubenswrapper[5115]: I0120 09:10:12.923201 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j"
Jan 20 09:10:12 crc kubenswrapper[5115]: E0120 09:10:12.923681 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:13.423658947 +0000 UTC m=+123.592437477 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:12 crc kubenswrapper[5115]: I0120 09:10:12.946210 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-5494g" event={"ID":"09bd4dd7-1e94-43db-b3bc-fe2dd530d8ec","Type":"ContainerStarted","Data":"c6fd3bff44fe50a0b58401d9b3c0bf164f6c001d24ee2c0d62551ade272e9815"}
Jan 20 09:10:12 crc kubenswrapper[5115]: I0120 09:10:12.964514 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-m6krp" event={"ID":"ecb1b469-4758-499e-a0ba-8204058552be","Type":"ContainerStarted","Data":"51cdc552eef228e01fec754efe08ca4499bd477430d44d8476a0d6a72e8158c5"}
Jan 20 09:10:12 crc kubenswrapper[5115]: I0120 09:10:12.968940 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29481660-hh6m6" event={"ID":"082f3bd2-f112-4f2e-b955-0826aac6df97","Type":"ContainerStarted","Data":"08844f14a2be2524b67d25e6d9e317be36bfd5bc9b4b4cda240955fd50dbb961"}
Jan 20 09:10:12 crc kubenswrapper[5115]: I0120 09:10:12.973685 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-h9rh5" event={"ID":"ef29fedc-43ad-4cf5-b3ef-10a28c46842f","Type":"ContainerStarted","Data":"5f14158e429c6f169c167efc97ae7ee8cb13e746c4dee1db68d688c231a5e7e8"}
Jan 20 09:10:12 crc kubenswrapper[5115]: I0120 09:10:12.985653 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-lg8fb" event={"ID":"664dc1e9-b220-4dd9-8576-b5798850bc57","Type":"ContainerStarted","Data":"883ad34e44bc13a65fb331c725c96d57ffd7da473ec9ed16860ba076f2702bf1"}
Jan 20 09:10:12 crc kubenswrapper[5115]: I0120 09:10:12.987563 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-lg8fb"
Jan 20 09:10:12 crc kubenswrapper[5115]: I0120 09:10:12.994504 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-ljj2s" event={"ID":"b9ac66ad-91ae-4ffd-b159-a7549ca71803","Type":"ContainerStarted","Data":"ceaf8b77d0526829ab984bb0b3daa675f7bb0100da4f269637f721e655cd2360"}
Jan 20 09:10:12 crc kubenswrapper[5115]: I0120 09:10:12.994845 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/downloads-747b44746d-ljj2s"
Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.002037 5115 generic.go:358] "Generic (PLEG): container finished" podID="0386fc07-a367-4188-8fab-3ce5d14ad6f2" containerID="becbfe546f7a2e1bb8cfdb84a57c1179541310157b00eb6f1280ed8ef84bf6c9" exitCode=0
Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.002224 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-4x4rk" event={"ID":"0386fc07-a367-4188-8fab-3ce5d14ad6f2","Type":"ContainerDied","Data":"becbfe546f7a2e1bb8cfdb84a57c1179541310157b00eb6f1280ed8ef84bf6c9"}
Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.016838 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-2pl95" event={"ID":"f6477423-4b0a-43d7-9514-bde25388af77","Type":"ContainerStarted","Data":"aa1f902fe6f5d74d02915fedddde26e938ccda6a6fc790c74302819840debc56"}
Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.025158 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 20 09:10:13 crc kubenswrapper[5115]: E0120 09:10:13.025805 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:13.525757203 +0000 UTC m=+123.694535733 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.028812 5115 patch_prober.go:28] interesting pod/downloads-747b44746d-ljj2s container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body=
Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.029678 5115 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-ljj2s" podUID="b9ac66ad-91ae-4ffd-b159-a7549ca71803" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused"
Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.033094 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-ttcl5" event={"ID":"a8dd6004-2cc4-4971-9dcb-18d8871286b8","Type":"ContainerStarted","Data":"3600dba173d1c61d6f6ab695b5a5c43e3072abb0d351f95623aa429868705043"}
Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.050952 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-65b6cccf98-lg8fb" podStartSLOduration=102.050935308 podStartE2EDuration="1m42.050935308s" podCreationTimestamp="2026-01-20 09:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:10:13.00806513 +0000 UTC m=+123.176843670" watchObservedRunningTime="2026-01-20 09:10:13.050935308 +0000 UTC m=+123.219713838"
Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.070845 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-747b44746d-ljj2s" podStartSLOduration=102.070826101 podStartE2EDuration="1m42.070826101s" podCreationTimestamp="2026-01-20 09:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:10:13.068731115 +0000 UTC m=+123.237509635" watchObservedRunningTime="2026-01-20 09:10:13.070826101 +0000 UTC m=+123.239604631"
Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.089312 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-n9hxc" event={"ID":"0d738dd6-3c15-4131-837d-591792cb41cd","Type":"ContainerStarted","Data":"fba3baf48c6183de048f0ec7d86881b0b0b8a0f79ebc580960b93f498caf9bee"}
Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.113549 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" event={"ID":"73f78db9-bab5-49ee-84a4-9f0825efca8a","Type":"ContainerStarted","Data":"cd61efcab514cc481b8abf90fad1504f795c14ca967ea45686ed74a313ace292"}
Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.115169 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-66458b6674-c88bx"
Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.129602 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j"
Jan 20 09:10:13 crc kubenswrapper[5115]: E0120 09:10:13.129947 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:13.629917595 +0000 UTC m=+123.798696125 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.139363 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"4216ee5471b2d0b2c75950445b5235e7a9fbc11060878d69eea5c4d59ae91980"}
Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.155109 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-xtwqk" event={"ID":"01855721-bd0b-4ddc-91d0-be658345b9c5","Type":"ContainerStarted","Data":"c6c3b97da8685ad26a30368e912e4bd3bef88b40806986a26910beaaa8f0a9fb"}
Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.158100 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" podStartSLOduration=102.15806995 podStartE2EDuration="1m42.15806995s" podCreationTimestamp="2026-01-20 09:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:10:13.155548142 +0000 UTC m=+123.324326672" watchObservedRunningTime="2026-01-20 09:10:13.15806995 +0000 UTC m=+123.326848480"
Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.193137 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jxpqr"
event={"ID":"b39cc292-22ad-4fb0-9d3f-6467c81680eb","Type":"ContainerStarted","Data":"5fb596da1738dbe8416b2b3a595dc262a4288da61aa3303a2ea6eb0db0479d63"} Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.210406 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-95nt8" event={"ID":"8c6ba355-2c21-431c-8767-821fb9075e1c","Type":"ContainerStarted","Data":"bfb694ead5c0258216ff138837d8130845e4622fc01c854a8d52dd93bbdfcdbc"} Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.211504 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-95nt8" Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.231092 5115 patch_prober.go:28] interesting pod/olm-operator-5cdf44d969-95nt8 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" start-of-body= Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.231207 5115 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-95nt8" podUID="8c6ba355-2c21-431c-8767-821fb9075e1c" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.232199 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:10:13 crc kubenswrapper[5115]: E0120 09:10:13.233857 5115 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:13.73383362 +0000 UTC m=+123.902612150 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.248825 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"d8310b84dc2c03d782dfa8f7355270550f4eccaa51192ceb47d2554a222451c1"} Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.263646 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-95nt8" podStartSLOduration=102.263617848 podStartE2EDuration="1m42.263617848s" podCreationTimestamp="2026-01-20 09:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:10:13.261254044 +0000 UTC m=+123.430032574" watchObservedRunningTime="2026-01-20 09:10:13.263617848 +0000 UTC m=+123.432396398" Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.283518 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-glkw9" 
event={"ID":"26f7f00b-d69c-4a82-934c-025eb1500a33","Type":"ContainerStarted","Data":"274a43812679da83ec8291c1b5343bdabb2bf7b42438e001c846d085d841b5cd"} Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.285070 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console-operator/console-operator-67c89758df-glkw9" Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.308290 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-ft42n" event={"ID":"fbc48af4-261d-4599-a7fd-edd26b2b4022","Type":"ContainerStarted","Data":"98f7d0441adb463cff7325f8b7fc2b1e1ae932d02f57de866d8c426324363283"} Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.316423 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-l96rs" event={"ID":"3b4463ed-eba2-4ba4-afb8-2424e957fc37","Type":"ContainerStarted","Data":"f6c068df1f75021aa18603756618ed617463b2c511d0a4369a1370bafb29a458"} Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.335504 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-tzrjx" event={"ID":"3d8f5093-1a2e-4c32-8c74-b6cfb185cc99","Type":"ContainerStarted","Data":"884539935bb1f8878042308d9999e84e1a2eef356f095222edb348b6b1199abf"} Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.336758 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:13 crc kubenswrapper[5115]: E0120 09:10:13.339493 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 
podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:13.83947316 +0000 UTC m=+124.008251690 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.349825 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-m6g4t" event={"ID":"118decd3-a665-4997-bd40-0f68d2295238","Type":"ContainerStarted","Data":"679bdfff9044d5b0da2632379142bdbb12d8f1e8613651726a7bfe0ea19fea0e"} Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.351993 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-gc77j" event={"ID":"b967aa59-3ad8-4a80-a870-970c4166dd31","Type":"ContainerStarted","Data":"7e302e1c59b3ab2f846eedc21557860d9acba1a085af62ab18debb1b64309de0"} Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.367333 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-9gfdh" event={"ID":"3984fc5a-413e-46e1-94ab-3c230891fe87","Type":"ContainerStarted","Data":"875b2918867b6e3f78a8dae2adc4f181e4875284a8cd56fc5c6d213e75261ea2"} Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.368735 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-9gfdh" Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.373122 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-service-ca/service-ca-74545575db-l96rs" podStartSLOduration=102.373098251 podStartE2EDuration="1m42.373098251s" podCreationTimestamp="2026-01-20 09:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:10:13.37303199 +0000 UTC m=+123.541810520" watchObservedRunningTime="2026-01-20 09:10:13.373098251 +0000 UTC m=+123.541876781" Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.379571 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-67c89758df-glkw9" podStartSLOduration=102.379564265 podStartE2EDuration="1m42.379564265s" podCreationTimestamp="2026-01-20 09:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:10:13.339329327 +0000 UTC m=+123.508107857" watchObservedRunningTime="2026-01-20 09:10:13.379564265 +0000 UTC m=+123.548342795" Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.391473 5115 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-9gfdh container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.43:8080/healthz\": dial tcp 10.217.0.43:8080: connect: connection refused" start-of-body= Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.391556 5115 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-9gfdh" podUID="3984fc5a-413e-46e1-94ab-3c230891fe87" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.43:8080/healthz\": dial tcp 10.217.0.43:8080: connect: connection refused" Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.392039 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-s85qm" event={"ID":"dd3b472c-53e1-402a-ad30-244ea317f0e1","Type":"ContainerStarted","Data":"66f82900b831f33022203ecf089c4daa28d84b6dd6f7ef70e57a1d524225d69d"} Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.412012 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-pss2p" event={"ID":"603cfb78-063c-444d-8434-38e8ff6b5f70","Type":"ContainerStarted","Data":"017b835d494bf2f06496ea1392bd823f965eb975bf926493be6531367ca0aee4"} Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.423473 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"2e85b747ec70b384b615e5bce3ac0531dcd9c919954dd52eee1a50c51619135f"} Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.433981 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-9gfdh" podStartSLOduration=102.42603817 podStartE2EDuration="1m42.42603817s" podCreationTimestamp="2026-01-20 09:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:10:13.408019927 +0000 UTC m=+123.576798457" watchObservedRunningTime="2026-01-20 09:10:13.42603817 +0000 UTC m=+123.594816700" Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.435439 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-s85qm" podStartSLOduration=102.435421082 podStartE2EDuration="1m42.435421082s" podCreationTimestamp="2026-01-20 09:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-20 09:10:13.42415108 +0000 UTC m=+123.592929610" watchObservedRunningTime="2026-01-20 09:10:13.435421082 +0000 UTC m=+123.604199612" Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.438445 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:10:13 crc kubenswrapper[5115]: E0120 09:10:13.439267 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:13.939251674 +0000 UTC m=+124.108030204 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.445199 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-lcng5" event={"ID":"31a102f9-d392-481f-85f7-4be9117cd31d","Type":"ContainerStarted","Data":"4e36671eb92c313415eff2616557fe69414813757951555ee8cd7b78adb01ea2"} Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.523934 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-2vzsk" 
event={"ID":"c6f108d0-ed4b-4318-bd96-7de2824bf73e","Type":"ContainerStarted","Data":"cf34808341ae10d73a36b6ee114824a2e212ee1211ead8c79b96024001089d11"} Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.541964 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:13 crc kubenswrapper[5115]: E0120 09:10:13.543397 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:14.043376065 +0000 UTC m=+124.212154595 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.645521 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:10:13 crc kubenswrapper[5115]: E0120 09:10:13.647190 5115 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:14.147144546 +0000 UTC m=+124.315923256 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.661681 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-pkz7s" Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.697193 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-755bb95488-2vzsk" podStartSLOduration=102.697165696 podStartE2EDuration="1m42.697165696s" podCreationTimestamp="2026-01-20 09:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:10:13.555261713 +0000 UTC m=+123.724040243" watchObservedRunningTime="2026-01-20 09:10:13.697165696 +0000 UTC m=+123.865944226" Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.700671 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.716522 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-67c89758df-glkw9" Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.746862 5115 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:13 crc kubenswrapper[5115]: E0120 09:10:13.747266 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:14.247252738 +0000 UTC m=+124.416031268 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.849619 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:10:13 crc kubenswrapper[5115]: E0120 09:10:13.850179 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:14.350155046 +0000 UTC m=+124.518933576 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.954754 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:13 crc kubenswrapper[5115]: E0120 09:10:13.955475 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:14.455448158 +0000 UTC m=+124.624226688 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.956516 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-lg8fb" Jan 20 09:10:14 crc kubenswrapper[5115]: I0120 09:10:14.068563 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:10:14 crc kubenswrapper[5115]: E0120 09:10:14.068915 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:14.568884077 +0000 UTC m=+124.737662607 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:14 crc kubenswrapper[5115]: I0120 09:10:14.170961 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:14 crc kubenswrapper[5115]: E0120 09:10:14.171476 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:14.671454126 +0000 UTC m=+124.840232656 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:14 crc kubenswrapper[5115]: I0120 09:10:14.271976 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:10:14 crc kubenswrapper[5115]: E0120 09:10:14.272159 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:14.772129274 +0000 UTC m=+124.940907804 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:14 crc kubenswrapper[5115]: I0120 09:10:14.274879 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:14 crc kubenswrapper[5115]: E0120 09:10:14.275487 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:14.775468824 +0000 UTC m=+124.944247354 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:14 crc kubenswrapper[5115]: I0120 09:10:14.364504 5115 ???:1] "http: TLS handshake error from 192.168.126.11:55222: no serving certificate available for the kubelet"
Jan 20 09:10:14 crc kubenswrapper[5115]: I0120 09:10:14.376708 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 20 09:10:14 crc kubenswrapper[5115]: E0120 09:10:14.377144 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:14.877120948 +0000 UTC m=+125.045899478 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:14 crc kubenswrapper[5115]: I0120 09:10:14.423116 5115 ???:1] "http: TLS handshake error from 192.168.126.11:55236: no serving certificate available for the kubelet"
Jan 20 09:10:14 crc kubenswrapper[5115]: I0120 09:10:14.471108 5115 ???:1] "http: TLS handshake error from 192.168.126.11:55240: no serving certificate available for the kubelet"
Jan 20 09:10:14 crc kubenswrapper[5115]: I0120 09:10:14.478241 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j"
Jan 20 09:10:14 crc kubenswrapper[5115]: E0120 09:10:14.478754 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:14.978731061 +0000 UTC m=+125.147509591 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:14 crc kubenswrapper[5115]: I0120 09:10:14.531069 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-69gcn" event={"ID":"f7ec9898-6747-40af-be60-ce1289d0a4e6","Type":"ContainerStarted","Data":"b11fd6a2306cd411f0028b604e66a34704528f4af33e91a10d60a2bc82ede027"}
Jan 20 09:10:14 crc kubenswrapper[5115]: I0120 09:10:14.549501 5115 ???:1] "http: TLS handshake error from 192.168.126.11:55248: no serving certificate available for the kubelet"
Jan 20 09:10:14 crc kubenswrapper[5115]: I0120 09:10:14.570776 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-8622t" event={"ID":"ac548cbe-da92-4dd6-bd33-705689710018","Type":"ContainerStarted","Data":"32c3c0ddd37a60d9857ab678812ca272a27fc659113c02ea581fe79c776141f2"}
Jan 20 09:10:14 crc kubenswrapper[5115]: I0120 09:10:14.572112 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-69gcn" podStartSLOduration=103.572075723 podStartE2EDuration="1m43.572075723s" podCreationTimestamp="2026-01-20 09:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:10:14.57124483 +0000 UTC m=+124.740023360" watchObservedRunningTime="2026-01-20 09:10:14.572075723 +0000 UTC m=+124.740854253"
Jan 20 09:10:14 crc kubenswrapper[5115]: I0120 09:10:14.582974 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 20 09:10:14 crc kubenswrapper[5115]: E0120 09:10:14.583592 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:15.08356352 +0000 UTC m=+125.252342050 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:14 crc kubenswrapper[5115]: I0120 09:10:14.704707 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j"
Jan 20 09:10:14 crc kubenswrapper[5115]: E0120 09:10:14.706216 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:15.206195436 +0000 UTC m=+125.374973966 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:14 crc kubenswrapper[5115]: I0120 09:10:14.706274 5115 ???:1] "http: TLS handshake error from 192.168.126.11:55258: no serving certificate available for the kubelet"
Jan 20 09:10:14 crc kubenswrapper[5115]: I0120 09:10:14.707841 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-6lm7w" event={"ID":"45b3a05c-a4a6-4e67-9c8f-c914c93cb801","Type":"ContainerStarted","Data":"1f08c0c08d46d56f5abfe3753b6bcdc1fa6d98aa4e44d81f7028c4bb52620059"}
Jan 20 09:10:14 crc kubenswrapper[5115]: I0120 09:10:14.734443 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-m6g4t" event={"ID":"118decd3-a665-4997-bd40-0f68d2295238","Type":"ContainerStarted","Data":"30197d2a8eba478c1cc1760f61a1263e6e709d83f8f8ebb93f86731179299136"}
Jan 20 09:10:14 crc kubenswrapper[5115]: I0120 09:10:14.736754 5115 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-pkz7s"]
Jan 20 09:10:14 crc kubenswrapper[5115]: I0120 09:10:14.819661 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 20 09:10:14 crc kubenswrapper[5115]: E0120 09:10:14.821060 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:15.321027704 +0000 UTC m=+125.489806234 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:14 crc kubenswrapper[5115]: I0120 09:10:14.921047 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j"
Jan 20 09:10:14 crc kubenswrapper[5115]: E0120 09:10:14.921425 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:15.421409784 +0000 UTC m=+125.590188314 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:14 crc kubenswrapper[5115]: I0120 09:10:14.949988 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-9hn8c" event={"ID":"f41303d0-06e3-4554-8fa9-d9dd935d0bec","Type":"ContainerStarted","Data":"fdd3efc8c732127419bdb406d5c956bac0291772cb27fcf0bbd4840987a64dea"}
Jan 20 09:10:14 crc kubenswrapper[5115]: I0120 09:10:14.972606 5115 ???:1] "http: TLS handshake error from 192.168.126.11:55260: no serving certificate available for the kubelet"
Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.008333 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-54c688565-6lm7w" podStartSLOduration=104.008304753 podStartE2EDuration="1m44.008304753s" podCreationTimestamp="2026-01-20 09:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:10:14.77125345 +0000 UTC m=+124.940031980" watchObservedRunningTime="2026-01-20 09:10:15.008304753 +0000 UTC m=+125.177083283"
Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.010538 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-9hn8c" podStartSLOduration=104.010525822 podStartE2EDuration="1m44.010525822s" podCreationTimestamp="2026-01-20 09:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:10:15.00971423 +0000 UTC m=+125.178492760" watchObservedRunningTime="2026-01-20 09:10:15.010525822 +0000 UTC m=+125.179304352"
Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.026832 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 20 09:10:15 crc kubenswrapper[5115]: E0120 09:10:15.029192 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:15.529165692 +0000 UTC m=+125.697944222 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.069330 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"85dcaade86f063dfd07dd8dd3838242dadcb7141d2e72c4d65bbea6d3df32cc6"}
Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.069485 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.108361 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-lcng5" event={"ID":"31a102f9-d392-481f-85f7-4be9117cd31d","Type":"ContainerStarted","Data":"ec20619374fc34db263286efbddbbf170e4ab13a8140da93ce8880910ca82771"}
Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.123725 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-rjtnv" event={"ID":"21e183fd-a881-4f61-a726-bcaaf60e71d5","Type":"ContainerStarted","Data":"5b58a7173f5625d704260f3fd29fb7f952ca76d2e1fc3bf8c886b66d46366673"}
Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.129083 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j"
Jan 20 09:10:15 crc kubenswrapper[5115]: E0120 09:10:15.135788 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:15.635747738 +0000 UTC m=+125.804526268 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.146358 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-smr5d" event={"ID":"d702c0ea-d2bd-41dc-9a3a-39caacbb288d","Type":"ContainerStarted","Data":"b966c232a6bba908a3cb408998b20eed2f0f64eb633e9680aa989c6a554d0a4c"}
Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.147288 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-smr5d"
Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.177288 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-777zr" event={"ID":"676675d9-dafb-4b30-ad88-bea33cf42ce0","Type":"ContainerStarted","Data":"94af2353f43c3f516b2f7b438b2db2e94e583cd7806c99cb9e1149867eab6b39"}
Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.193971 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29481660-hh6m6" event={"ID":"082f3bd2-f112-4f2e-b955-0826aac6df97","Type":"ContainerStarted","Data":"91b474462e17975b2a2291c38c1eb2339450031fdea7fbcff486b36751736b0a"}
Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.195131 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-rjtnv" podStartSLOduration=104.195115769 podStartE2EDuration="1m44.195115769s" podCreationTimestamp="2026-01-20 09:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:10:15.16755673 +0000 UTC m=+125.336335260" watchObservedRunningTime="2026-01-20 09:10:15.195115769 +0000 UTC m=+125.363894299"
Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.202375 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-smr5d" podStartSLOduration=104.202348633 podStartE2EDuration="1m44.202348633s" podCreationTimestamp="2026-01-20 09:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:10:15.197219625 +0000 UTC m=+125.365998155" watchObservedRunningTime="2026-01-20 09:10:15.202348633 +0000 UTC m=+125.371127163"
Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.226751 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7ntwm" event={"ID":"10472dc9-9bed-4d08-811a-76a55f0d6cf4","Type":"ContainerStarted","Data":"fbeb17a228eed5edc217c90401e742e5b0c7e29b5cc6b24113e772348f8e37d9"}
Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.230556 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 20 09:10:15 crc kubenswrapper[5115]: E0120 09:10:15.231814 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:15.731783352 +0000 UTC m=+125.900562062 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.243250 5115 ???:1] "http: TLS handshake error from 192.168.126.11:55270: no serving certificate available for the kubelet"
Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.243816 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-ztcgs" event={"ID":"6008f0e6-56c0-4fdd-89b8-0649fb365b0f","Type":"ContainerStarted","Data":"db58d0f502123e7bc044ec581bb2c8cb19c4c3d370def9804c1d6afe2300fc8e"}
Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.256546 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-h9rh5" event={"ID":"ef29fedc-43ad-4cf5-b3ef-10a28c46842f","Type":"ContainerStarted","Data":"92f21792d1cd5d81e606078b9ae4b9cf5f3e41142ad1cfaa99ff73710e2b0061"}
Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.284310 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-78z8z" event={"ID":"9aa837bd-63fc-4bb8-b158-d8632117a117","Type":"ContainerStarted","Data":"9d839f302bc858643c72edff27530af9683871acfdb2cc7ee62888ae0dec2fcf"}
Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.331999 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j"
Jan 20 09:10:15 crc kubenswrapper[5115]: E0120 09:10:15.333434 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:15.833410365 +0000 UTC m=+126.002188895 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.339337 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-5rdz6" event={"ID":"d2be688d-0ad1-4b7a-9c55-0f2b7500cfdf","Type":"ContainerStarted","Data":"fd7a42fa72c3427ad9620ef2052c0caea8c21b1957ce99460e6432583c26bcfa"}
Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.381423 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-777zr" podStartSLOduration=104.381404172 podStartE2EDuration="1m44.381404172s" podCreationTimestamp="2026-01-20 09:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:10:15.291635735 +0000 UTC m=+125.460414265" watchObservedRunningTime="2026-01-20 09:10:15.381404172 +0000 UTC m=+125.550182702"
Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.384072 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-2pl95" event={"ID":"f6477423-4b0a-43d7-9514-bde25388af77","Type":"ContainerStarted","Data":"2f6ba41ee6db11c7a43d43c2a79a711e54bad73e5e177b4c795f496c28b34516"}
Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.416167 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-n9hxc" event={"ID":"0d738dd6-3c15-4131-837d-591792cb41cd","Type":"ContainerStarted","Data":"61ff3e55fc40df5d3c04cbeadd387ff02ac73b5771b6bc7863af5b8efb3e98f4"}
Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.435176 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 20 09:10:15 crc kubenswrapper[5115]: E0120 09:10:15.435547 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:15.935506101 +0000 UTC m=+126.104284631 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.435859 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j"
Jan 20 09:10:15 crc kubenswrapper[5115]: E0120 09:10:15.438258 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:15.938240784 +0000 UTC m=+126.107019314 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.449817 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"a3c8492358b1e17a5b01ad3bdd46cc8aced54f44c93d0f320092b1db7b32253d"}
Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.466356 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29481660-hh6m6" podStartSLOduration=104.466333727 podStartE2EDuration="1m44.466333727s" podCreationTimestamp="2026-01-20 09:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:10:15.400423081 +0000 UTC m=+125.569201611" watchObservedRunningTime="2026-01-20 09:10:15.466333727 +0000 UTC m=+125.635112257"
Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.472529 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7ntwm" podStartSLOduration=104.472508693 podStartE2EDuration="1m44.472508693s" podCreationTimestamp="2026-01-20 09:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:10:15.470416626 +0000 UTC m=+125.639195156" watchObservedRunningTime="2026-01-20 09:10:15.472508693 +0000 UTC m=+125.641287223"
Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.532259 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-64d44f6ddf-78z8z" podStartSLOduration=104.532237563 podStartE2EDuration="1m44.532237563s" podCreationTimestamp="2026-01-20 09:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:10:15.530006124 +0000 UTC m=+125.698784654" watchObservedRunningTime="2026-01-20 09:10:15.532237563 +0000 UTC m=+125.701016093"
Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.539697 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 20 09:10:15 crc kubenswrapper[5115]: E0120 09:10:15.540046 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:16.040019902 +0000 UTC m=+126.208798432 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.563925 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-xtwqk" event={"ID":"01855721-bd0b-4ddc-91d0-be658345b9c5","Type":"ContainerStarted","Data":"e7d503df6c400b952a9f18f7d520f8669cddaf0336429554e035288fbb861dad"}
Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.567147 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-5rdz6" podStartSLOduration=104.567134228 podStartE2EDuration="1m44.567134228s" podCreationTimestamp="2026-01-20 09:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:10:15.566333277 +0000 UTC m=+125.735111807" watchObservedRunningTime="2026-01-20 09:10:15.567134228 +0000 UTC m=+125.735912758"
Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.594506 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-mfd49" event={"ID":"80f8b6d4-7eb4-42ec-9976-60dc6db3148f","Type":"ContainerStarted","Data":"a759b846b0f2fd42a045e7a86fb6f4efd76c300ac821a2741955c8437c88cf9e"}
Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.595820 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-mfd49"
Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.642648 5115 generic.go:358] "Generic (PLEG): container finished" podID="3b28944b-12d3-4087-b906-99fbf2937724" containerID="734e1601652462f7bd82995e493ba0a72c74f78c5482c86ba0be7444bba17e45" exitCode=0
Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.642814 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-s5mfg" event={"ID":"3b28944b-12d3-4087-b906-99fbf2937724","Type":"ContainerDied","Data":"734e1601652462f7bd82995e493ba0a72c74f78c5482c86ba0be7444bba17e45"}
Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.647626 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j"
Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.648236 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-mfd49"
Jan 20 09:10:15 crc kubenswrapper[5115]: E0120 09:10:15.649196 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:16.149181657 +0000 UTC m=+126.317960187 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.679928 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-2pl95" podStartSLOduration=104.67988527 podStartE2EDuration="1m44.67988527s" podCreationTimestamp="2026-01-20 09:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:10:15.649881286 +0000 UTC m=+125.818659816" watchObservedRunningTime="2026-01-20 09:10:15.67988527 +0000 UTC m=+125.848663800"
Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.680331 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jxpqr" event={"ID":"b39cc292-22ad-4fb0-9d3f-6467c81680eb","Type":"ContainerStarted","Data":"b0488d20e94845aedd9b1bbe8d5471305129edf3c1b7b5a598c3cede13658a01"}
Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.681311 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jxpqr"
Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.700147 5115 ???:1] "http: TLS handshake error from 192.168.126.11:55274: no serving certificate available for the kubelet"
Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.701139 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-h9rh5" podStartSLOduration=104.701108749 podStartE2EDuration="1m44.701108749s" podCreationTimestamp="2026-01-20 09:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:10:15.689324863 +0000 UTC m=+125.858103393" watchObservedRunningTime="2026-01-20 09:10:15.701108749 +0000 UTC m=+125.869887279"
Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.725845 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-59xcc" event={"ID":"d60eae6f-6fe4-41cd-8c8f-54749aacc87e","Type":"ContainerStarted","Data":"593185d573871990da6dc3a956cd8bd9ff1270503cdef92e2909a86f8647f48f"}
Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.733763 5115 patch_prober.go:28] interesting pod/downloads-747b44746d-ljj2s container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body=
Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.733849 5115 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-ljj2s" podUID="b9ac66ad-91ae-4ffd-b159-a7549ca71803" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused"
Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.734992 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-xtwqk" podStartSLOduration=104.734887534 podStartE2EDuration="1m44.734887534s" podCreationTimestamp="2026-01-20 09:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:10:15.734216936 +0000 UTC m=+125.902995466" watchObservedRunningTime="2026-01-20 09:10:15.734887534 +0000 UTC m=+125.903666064"
Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.735755 5115 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-68cf44c8b8-n9hxc"
Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.757782 5115 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-n9hxc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 20 09:10:15 crc kubenswrapper[5115]: [-]has-synced failed: reason withheld
Jan 20 09:10:15 crc kubenswrapper[5115]: [+]process-running ok
Jan 20 09:10:15 crc kubenswrapper[5115]: healthz check failed
Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.757832 5115 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-n9hxc" podUID="0d738dd6-3c15-4131-837d-591792cb41cd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.758150 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-9gfdh"
Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.758414 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 20 09:10:15 crc kubenswrapper[5115]: E0120 09:10:15.758640 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed.
No retries permitted until 2026-01-20 09:10:16.25862046 +0000 UTC m=+126.427398990 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.758819 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:15 crc kubenswrapper[5115]: E0120 09:10:15.759218 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:16.259206685 +0000 UTC m=+126.427985215 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.793497 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-95nt8" Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.858670 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jxpqr" podStartSLOduration=104.85865203 podStartE2EDuration="1m44.85865203s" podCreationTimestamp="2026-01-20 09:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:10:15.855625579 +0000 UTC m=+126.024404099" watchObservedRunningTime="2026-01-20 09:10:15.85865203 +0000 UTC m=+126.027430550" Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.862726 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:10:15 crc kubenswrapper[5115]: E0120 09:10:15.864500 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-01-20 09:10:16.364477557 +0000 UTC m=+126.533256077 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.965887 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:15 crc kubenswrapper[5115]: E0120 09:10:15.966421 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:16.466403448 +0000 UTC m=+126.635181978 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.991880 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-mfd49" podStartSLOduration=104.991853481 podStartE2EDuration="1m44.991853481s" podCreationTimestamp="2026-01-20 09:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:10:15.927169386 +0000 UTC m=+126.095947916" watchObservedRunningTime="2026-01-20 09:10:15.991853481 +0000 UTC m=+126.160632011" Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.034492 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-68cf44c8b8-n9hxc" podStartSLOduration=105.034471732 podStartE2EDuration="1m45.034471732s" podCreationTimestamp="2026-01-20 09:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:10:16.033476705 +0000 UTC m=+126.202255235" watchObservedRunningTime="2026-01-20 09:10:16.034471732 +0000 UTC m=+126.203250262" Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.070009 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: 
\"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:10:16 crc kubenswrapper[5115]: E0120 09:10:16.070253 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:16.570232391 +0000 UTC m=+126.739010921 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.152005 5115 patch_prober.go:28] interesting pod/packageserver-7d4fc7d867-smr5d container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.152095 5115 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-smr5d" podUID="d702c0ea-d2bd-41dc-9a3a-39caacbb288d" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.42:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.171711 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:16 crc kubenswrapper[5115]: E0120 09:10:16.172110 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:16.67209674 +0000 UTC m=+126.840875270 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.247045 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jxpqr" Jan 20 09:10:16 crc kubenswrapper[5115]: E0120 09:10:16.276684 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:16.776656963 +0000 UTC m=+126.945435493 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.276552 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.277726 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:16 crc kubenswrapper[5115]: E0120 09:10:16.278093 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:16.7780733 +0000 UTC m=+126.946851830 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.320328 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2dlnj"] Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.328845 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2dlnj" Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.334799 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.338450 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2dlnj"] Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.387541 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.387841 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwzdv\" (UniqueName: \"kubernetes.io/projected/1d51d284-ea4b-4e3f-95bd-de28c5df1f3c-kube-api-access-xwzdv\") pod \"community-operators-2dlnj\" (UID: \"1d51d284-ea4b-4e3f-95bd-de28c5df1f3c\") " 
pod="openshift-marketplace/community-operators-2dlnj" Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.387917 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d51d284-ea4b-4e3f-95bd-de28c5df1f3c-utilities\") pod \"community-operators-2dlnj\" (UID: \"1d51d284-ea4b-4e3f-95bd-de28c5df1f3c\") " pod="openshift-marketplace/community-operators-2dlnj" Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.387957 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d51d284-ea4b-4e3f-95bd-de28c5df1f3c-catalog-content\") pod \"community-operators-2dlnj\" (UID: \"1d51d284-ea4b-4e3f-95bd-de28c5df1f3c\") " pod="openshift-marketplace/community-operators-2dlnj" Jan 20 09:10:16 crc kubenswrapper[5115]: E0120 09:10:16.388079 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:16.888057958 +0000 UTC m=+127.056836488 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.427389 5115 ???:1] "http: TLS handshake error from 192.168.126.11:55278: no serving certificate available for the kubelet" Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.489037 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d51d284-ea4b-4e3f-95bd-de28c5df1f3c-utilities\") pod \"community-operators-2dlnj\" (UID: \"1d51d284-ea4b-4e3f-95bd-de28c5df1f3c\") " pod="openshift-marketplace/community-operators-2dlnj" Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.489082 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d51d284-ea4b-4e3f-95bd-de28c5df1f3c-catalog-content\") pod \"community-operators-2dlnj\" (UID: \"1d51d284-ea4b-4e3f-95bd-de28c5df1f3c\") " pod="openshift-marketplace/community-operators-2dlnj" Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.489235 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.489278 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for 
volume \"kube-api-access-xwzdv\" (UniqueName: \"kubernetes.io/projected/1d51d284-ea4b-4e3f-95bd-de28c5df1f3c-kube-api-access-xwzdv\") pod \"community-operators-2dlnj\" (UID: \"1d51d284-ea4b-4e3f-95bd-de28c5df1f3c\") " pod="openshift-marketplace/community-operators-2dlnj" Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.489741 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d51d284-ea4b-4e3f-95bd-de28c5df1f3c-utilities\") pod \"community-operators-2dlnj\" (UID: \"1d51d284-ea4b-4e3f-95bd-de28c5df1f3c\") " pod="openshift-marketplace/community-operators-2dlnj" Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.490068 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d51d284-ea4b-4e3f-95bd-de28c5df1f3c-catalog-content\") pod \"community-operators-2dlnj\" (UID: \"1d51d284-ea4b-4e3f-95bd-de28c5df1f3c\") " pod="openshift-marketplace/community-operators-2dlnj" Jan 20 09:10:16 crc kubenswrapper[5115]: E0120 09:10:16.490132 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:16.990115522 +0000 UTC m=+127.158894052 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.532241 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mrnvw"] Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.538778 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwzdv\" (UniqueName: \"kubernetes.io/projected/1d51d284-ea4b-4e3f-95bd-de28c5df1f3c-kube-api-access-xwzdv\") pod \"community-operators-2dlnj\" (UID: \"1d51d284-ea4b-4e3f-95bd-de28c5df1f3c\") " pod="openshift-marketplace/community-operators-2dlnj" Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.545885 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mrnvw"] Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.546115 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mrnvw" Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.566920 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.592833 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:10:16 crc kubenswrapper[5115]: E0120 09:10:16.593031 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:17.09299945 +0000 UTC m=+127.261777980 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.593198 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e388c4ad-0d02-4736-b503-a96f7478edb4-utilities\") pod \"certified-operators-mrnvw\" (UID: \"e388c4ad-0d02-4736-b503-a96f7478edb4\") " pod="openshift-marketplace/certified-operators-mrnvw" Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.593307 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25426\" (UniqueName: \"kubernetes.io/projected/e388c4ad-0d02-4736-b503-a96f7478edb4-kube-api-access-25426\") pod \"certified-operators-mrnvw\" (UID: \"e388c4ad-0d02-4736-b503-a96f7478edb4\") " pod="openshift-marketplace/certified-operators-mrnvw" Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.593495 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.593578 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/e388c4ad-0d02-4736-b503-a96f7478edb4-catalog-content\") pod \"certified-operators-mrnvw\" (UID: \"e388c4ad-0d02-4736-b503-a96f7478edb4\") " pod="openshift-marketplace/certified-operators-mrnvw" Jan 20 09:10:16 crc kubenswrapper[5115]: E0120 09:10:16.594002 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:17.093988277 +0000 UTC m=+127.262766807 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.667399 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2dlnj" Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.694565 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.694881 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e388c4ad-0d02-4736-b503-a96f7478edb4-catalog-content\") pod \"certified-operators-mrnvw\" (UID: \"e388c4ad-0d02-4736-b503-a96f7478edb4\") " pod="openshift-marketplace/certified-operators-mrnvw" Jan 20 09:10:16 crc kubenswrapper[5115]: E0120 09:10:16.695077 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:17.195039635 +0000 UTC m=+127.363818165 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.695487 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e388c4ad-0d02-4736-b503-a96f7478edb4-catalog-content\") pod \"certified-operators-mrnvw\" (UID: \"e388c4ad-0d02-4736-b503-a96f7478edb4\") " pod="openshift-marketplace/certified-operators-mrnvw" Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.695578 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e388c4ad-0d02-4736-b503-a96f7478edb4-utilities\") pod \"certified-operators-mrnvw\" (UID: \"e388c4ad-0d02-4736-b503-a96f7478edb4\") " pod="openshift-marketplace/certified-operators-mrnvw" Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.695694 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-25426\" (UniqueName: \"kubernetes.io/projected/e388c4ad-0d02-4736-b503-a96f7478edb4-kube-api-access-25426\") pod \"certified-operators-mrnvw\" (UID: \"e388c4ad-0d02-4736-b503-a96f7478edb4\") " pod="openshift-marketplace/certified-operators-mrnvw" Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.696614 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e388c4ad-0d02-4736-b503-a96f7478edb4-utilities\") pod \"certified-operators-mrnvw\" (UID: \"e388c4ad-0d02-4736-b503-a96f7478edb4\") " 
pod="openshift-marketplace/certified-operators-mrnvw" Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.706383 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-cn6h9"] Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.714081 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cn6h9" Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.734693 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-25426\" (UniqueName: \"kubernetes.io/projected/e388c4ad-0d02-4736-b503-a96f7478edb4-kube-api-access-25426\") pod \"certified-operators-mrnvw\" (UID: \"e388c4ad-0d02-4736-b503-a96f7478edb4\") " pod="openshift-marketplace/certified-operators-mrnvw" Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.735956 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cn6h9"] Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.753251 5115 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-n9hxc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 09:10:16 crc kubenswrapper[5115]: [-]has-synced failed: reason withheld Jan 20 09:10:16 crc kubenswrapper[5115]: [+]process-running ok Jan 20 09:10:16 crc kubenswrapper[5115]: healthz check failed Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.753326 5115 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-n9hxc" podUID="0d738dd6-3c15-4131-837d-591792cb41cd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.806553 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-59xcc" 
event={"ID":"d60eae6f-6fe4-41cd-8c8f-54749aacc87e","Type":"ContainerStarted","Data":"e0f1eefe6ad27b2c5be50e40392f96025b89fb3e134d9e85311a28f373496130"} Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.806788 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-dns/dns-default-59xcc" Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.807758 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c182ef91-1ca8-4330-bd75-8120c4401b54-utilities\") pod \"community-operators-cn6h9\" (UID: \"c182ef91-1ca8-4330-bd75-8120c4401b54\") " pod="openshift-marketplace/community-operators-cn6h9" Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.807830 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.807888 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c182ef91-1ca8-4330-bd75-8120c4401b54-catalog-content\") pod \"community-operators-cn6h9\" (UID: \"c182ef91-1ca8-4330-bd75-8120c4401b54\") " pod="openshift-marketplace/community-operators-cn6h9" Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.807955 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fp6vx\" (UniqueName: \"kubernetes.io/projected/c182ef91-1ca8-4330-bd75-8120c4401b54-kube-api-access-fp6vx\") pod \"community-operators-cn6h9\" (UID: \"c182ef91-1ca8-4330-bd75-8120c4401b54\") " 
pod="openshift-marketplace/community-operators-cn6h9" Jan 20 09:10:16 crc kubenswrapper[5115]: E0120 09:10:16.809469 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:17.309446391 +0000 UTC m=+127.478224911 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.823503 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-8622t" event={"ID":"ac548cbe-da92-4dd6-bd33-705689710018","Type":"ContainerStarted","Data":"3e31bf11180906f7b330777064934706cfb0c8c4a18f718f32ff9e3a8b0b8448"} Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.832392 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-ft42n" event={"ID":"fbc48af4-261d-4599-a7fd-edd26b2b4022","Type":"ContainerStarted","Data":"5bf3f0836e17df2b3ed3402a2d5fbfb042d3679bf98612c415ec5630cc23305e"} Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.838496 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-59xcc" podStartSLOduration=10.838471118 podStartE2EDuration="10.838471118s" podCreationTimestamp="2026-01-20 09:10:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:10:16.83666146 +0000 UTC m=+127.005439990" 
watchObservedRunningTime="2026-01-20 09:10:16.838471118 +0000 UTC m=+127.007249648" Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.848639 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-tzrjx" event={"ID":"3d8f5093-1a2e-4c32-8c74-b6cfb185cc99","Type":"ContainerStarted","Data":"d695445bedef5f16dfd39f8315a548a1726ead3a0d76056cf7bc7035efb0c47a"} Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.848719 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-tzrjx" event={"ID":"3d8f5093-1a2e-4c32-8c74-b6cfb185cc99","Type":"ContainerStarted","Data":"03e85f1250a159644682d8c2988a07c749e0197930f9f9a9280d8cc1cb25fe8c"} Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.857268 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.860263 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-m6g4t" event={"ID":"118decd3-a665-4997-bd40-0f68d2295238","Type":"ContainerStarted","Data":"208f62729a2edf66180ea82cd91b6e6bc5090360ae7cd4eef33cf055d1f09245"} Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.883270 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-799b87ffcd-8622t" podStartSLOduration=105.883247488 podStartE2EDuration="1m45.883247488s" podCreationTimestamp="2026-01-20 09:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:10:16.882596321 +0000 UTC m=+127.051374851" watchObservedRunningTime="2026-01-20 09:10:16.883247488 +0000 UTC m=+127.052026018" Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.888254 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-gc77j" event={"ID":"b967aa59-3ad8-4a80-a870-970c4166dd31","Type":"ContainerStarted","Data":"f4b2c38d2426bfd844921a4b04717cc1b1b784afe9d058de47d652b81bd68872"} Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.889393 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-gc77j" Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.910493 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mrnvw" Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.912352 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.912777 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c182ef91-1ca8-4330-bd75-8120c4401b54-catalog-content\") pod \"community-operators-cn6h9\" (UID: \"c182ef91-1ca8-4330-bd75-8120c4401b54\") " pod="openshift-marketplace/community-operators-cn6h9" Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.912827 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fp6vx\" (UniqueName: \"kubernetes.io/projected/c182ef91-1ca8-4330-bd75-8120c4401b54-kube-api-access-fp6vx\") pod \"community-operators-cn6h9\" (UID: \"c182ef91-1ca8-4330-bd75-8120c4401b54\") " pod="openshift-marketplace/community-operators-cn6h9" Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.912909 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c182ef91-1ca8-4330-bd75-8120c4401b54-utilities\") pod \"community-operators-cn6h9\" (UID: \"c182ef91-1ca8-4330-bd75-8120c4401b54\") " pod="openshift-marketplace/community-operators-cn6h9" Jan 20 09:10:16 crc kubenswrapper[5115]: E0120 09:10:16.913180 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:17.413122439 +0000 UTC m=+127.581900979 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.913319 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c182ef91-1ca8-4330-bd75-8120c4401b54-utilities\") pod \"community-operators-cn6h9\" (UID: \"c182ef91-1ca8-4330-bd75-8120c4401b54\") " pod="openshift-marketplace/community-operators-cn6h9" Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.915528 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c182ef91-1ca8-4330-bd75-8120c4401b54-catalog-content\") pod \"community-operators-cn6h9\" (UID: \"c182ef91-1ca8-4330-bd75-8120c4401b54\") " pod="openshift-marketplace/community-operators-cn6h9" Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.935334 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-lcng5" event={"ID":"31a102f9-d392-481f-85f7-4be9117cd31d","Type":"ContainerStarted","Data":"c719c8a450ed77e0000f58d23c7588dd7e5f8bb38a0115a9a2984f9aa9f5bbab"} Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.947403 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ln8lc"] Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.965888 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ln8lc" Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.983453 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-ft42n" podStartSLOduration=11.983429293 podStartE2EDuration="11.983429293s" podCreationTimestamp="2026-01-20 09:10:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:10:16.983119545 +0000 UTC m=+127.151898075" watchObservedRunningTime="2026-01-20 09:10:16.983429293 +0000 UTC m=+127.152207823" Jan 20 09:10:17 crc kubenswrapper[5115]: I0120 09:10:17.012824 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-m6krp" event={"ID":"ecb1b469-4758-499e-a0ba-8204058552be","Type":"ContainerStarted","Data":"623ef71af5aaa936c2b34250ed6bfeabb18db8f3cd11fb770c90a6c98f43618f"} Jan 20 09:10:17 crc kubenswrapper[5115]: I0120 09:10:17.013974 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:17 crc 
kubenswrapper[5115]: E0120 09:10:17.017363 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:17.517339922 +0000 UTC m=+127.686118442 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:17 crc kubenswrapper[5115]: I0120 09:10:17.039002 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-ztcgs" event={"ID":"6008f0e6-56c0-4fdd-89b8-0649fb365b0f","Type":"ContainerStarted","Data":"f9734becf9da70049daba053ac14471c6d41b24eee9735cc9ae0bb10bf63500f"} Jan 20 09:10:17 crc kubenswrapper[5115]: I0120 09:10:17.043543 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-5rdz6" event={"ID":"d2be688d-0ad1-4b7a-9c55-0f2b7500cfdf","Type":"ContainerStarted","Data":"580ba2b077fd50f319404f9b893158cc5f4bbdbcee8233b368fbf311b1e7dd7d"} Jan 20 09:10:17 crc kubenswrapper[5115]: I0120 09:10:17.055601 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ln8lc"] Jan 20 09:10:17 crc kubenswrapper[5115]: I0120 09:10:17.092152 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fp6vx\" (UniqueName: \"kubernetes.io/projected/c182ef91-1ca8-4330-bd75-8120c4401b54-kube-api-access-fp6vx\") pod \"community-operators-cn6h9\" (UID: \"c182ef91-1ca8-4330-bd75-8120c4401b54\") " 
pod="openshift-marketplace/community-operators-cn6h9" Jan 20 09:10:17 crc kubenswrapper[5115]: I0120 09:10:17.114447 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-tzrjx" podStartSLOduration=106.114417683 podStartE2EDuration="1m46.114417683s" podCreationTimestamp="2026-01-20 09:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:10:17.088610982 +0000 UTC m=+127.257389512" watchObservedRunningTime="2026-01-20 09:10:17.114417683 +0000 UTC m=+127.283196213" Jan 20 09:10:17 crc kubenswrapper[5115]: I0120 09:10:17.122111 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cn6h9" Jan 20 09:10:17 crc kubenswrapper[5115]: I0120 09:10:17.123233 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-4x4rk" event={"ID":"0386fc07-a367-4188-8fab-3ce5d14ad6f2","Type":"ContainerStarted","Data":"5e106c2a534c2832eb7b6fe6cc406cf531006613c40446e80e9b15a58be900c0"} Jan 20 09:10:17 crc kubenswrapper[5115]: I0120 09:10:17.152197 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:10:17 crc kubenswrapper[5115]: I0120 09:10:17.152563 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/098c57a3-a775-41d0-b528-6833df51eb70-catalog-content\") pod \"certified-operators-ln8lc\" (UID: \"098c57a3-a775-41d0-b528-6833df51eb70\") " pod="openshift-marketplace/certified-operators-ln8lc" Jan 20 09:10:17 crc 
kubenswrapper[5115]: I0120 09:10:17.152638 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/098c57a3-a775-41d0-b528-6833df51eb70-utilities\") pod \"certified-operators-ln8lc\" (UID: \"098c57a3-a775-41d0-b528-6833df51eb70\") " pod="openshift-marketplace/certified-operators-ln8lc" Jan 20 09:10:17 crc kubenswrapper[5115]: I0120 09:10:17.152658 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ft22z\" (UniqueName: \"kubernetes.io/projected/098c57a3-a775-41d0-b528-6833df51eb70-kube-api-access-ft22z\") pod \"certified-operators-ln8lc\" (UID: \"098c57a3-a775-41d0-b528-6833df51eb70\") " pod="openshift-marketplace/certified-operators-ln8lc" Jan 20 09:10:17 crc kubenswrapper[5115]: E0120 09:10:17.152779 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:17.652756471 +0000 UTC m=+127.821535001 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:17 crc kubenswrapper[5115]: I0120 09:10:17.164093 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-gc77j" podStartSLOduration=106.164070174 podStartE2EDuration="1m46.164070174s" podCreationTimestamp="2026-01-20 09:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:10:17.160824396 +0000 UTC m=+127.329602926" watchObservedRunningTime="2026-01-20 09:10:17.164070174 +0000 UTC m=+127.332848704" Jan 20 09:10:17 crc kubenswrapper[5115]: I0120 09:10:17.170004 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-xn6qp" event={"ID":"72f63421-cfe9-45f8-85fe-b779a81a7ebb","Type":"ContainerStarted","Data":"e135243144f39f667f48060809952423e9baf250db9ce7fbeac18b53368c199e"} Jan 20 09:10:17 crc kubenswrapper[5115]: I0120 09:10:17.170358 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-xn6qp" event={"ID":"72f63421-cfe9-45f8-85fe-b779a81a7ebb","Type":"ContainerStarted","Data":"aebedec17e42fd5419092403fcaf894225a0a1e0062fb7d78784967ec979f31d"} Jan 20 09:10:17 crc kubenswrapper[5115]: I0120 09:10:17.178413 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-xtwqk" 
event={"ID":"01855721-bd0b-4ddc-91d0-be658345b9c5","Type":"ContainerStarted","Data":"9f3bccd5b0f20ddbd7177017144088df3498ba8358f0134c9b7a7de81336524c"} Jan 20 09:10:17 crc kubenswrapper[5115]: I0120 09:10:17.197253 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-lcng5" podStartSLOduration=106.197225702 podStartE2EDuration="1m46.197225702s" podCreationTimestamp="2026-01-20 09:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:10:17.195939708 +0000 UTC m=+127.364718238" watchObservedRunningTime="2026-01-20 09:10:17.197225702 +0000 UTC m=+127.366004242" Jan 20 09:10:17 crc kubenswrapper[5115]: I0120 09:10:17.214173 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-s5mfg" event={"ID":"3b28944b-12d3-4087-b906-99fbf2937724","Type":"ContainerStarted","Data":"681c85f143fd196233b8af99153dc4afaefb32d23343907d3f47bcdc3bc17dc8"} Jan 20 09:10:17 crc kubenswrapper[5115]: I0120 09:10:17.214230 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-config-operator/openshift-config-operator-5777786469-s5mfg" Jan 20 09:10:17 crc kubenswrapper[5115]: I0120 09:10:17.227443 5115 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-pkz7s" podUID="4d93cff2-21b0-4fcb-b899-b6efe5a56822" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://fbd3f92e049db05dae4cc895fdc510d06b5848377015dd755d42e4d740ef5442" gracePeriod=30 Jan 20 09:10:17 crc kubenswrapper[5115]: I0120 09:10:17.229308 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-smr5d" Jan 20 09:10:17 crc kubenswrapper[5115]: I0120 09:10:17.265272 5115 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-m6g4t" podStartSLOduration=106.265251635 podStartE2EDuration="1m46.265251635s" podCreationTimestamp="2026-01-20 09:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:10:17.222411007 +0000 UTC m=+127.391189537" watchObservedRunningTime="2026-01-20 09:10:17.265251635 +0000 UTC m=+127.434030165" Jan 20 09:10:17 crc kubenswrapper[5115]: I0120 09:10:17.266527 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/098c57a3-a775-41d0-b528-6833df51eb70-catalog-content\") pod \"certified-operators-ln8lc\" (UID: \"098c57a3-a775-41d0-b528-6833df51eb70\") " pod="openshift-marketplace/certified-operators-ln8lc" Jan 20 09:10:17 crc kubenswrapper[5115]: I0120 09:10:17.266788 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/098c57a3-a775-41d0-b528-6833df51eb70-utilities\") pod \"certified-operators-ln8lc\" (UID: \"098c57a3-a775-41d0-b528-6833df51eb70\") " pod="openshift-marketplace/certified-operators-ln8lc" Jan 20 09:10:17 crc kubenswrapper[5115]: I0120 09:10:17.266843 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ft22z\" (UniqueName: \"kubernetes.io/projected/098c57a3-a775-41d0-b528-6833df51eb70-kube-api-access-ft22z\") pod \"certified-operators-ln8lc\" (UID: \"098c57a3-a775-41d0-b528-6833df51eb70\") " pod="openshift-marketplace/certified-operators-ln8lc" Jan 20 09:10:17 crc kubenswrapper[5115]: I0120 09:10:17.267191 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:17 crc kubenswrapper[5115]: I0120 09:10:17.272986 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/098c57a3-a775-41d0-b528-6833df51eb70-utilities\") pod \"certified-operators-ln8lc\" (UID: \"098c57a3-a775-41d0-b528-6833df51eb70\") " pod="openshift-marketplace/certified-operators-ln8lc" Jan 20 09:10:17 crc kubenswrapper[5115]: E0120 09:10:17.285472 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:17.785452746 +0000 UTC m=+127.954231276 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:17 crc kubenswrapper[5115]: I0120 09:10:17.291147 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/098c57a3-a775-41d0-b528-6833df51eb70-catalog-content\") pod \"certified-operators-ln8lc\" (UID: \"098c57a3-a775-41d0-b528-6833df51eb70\") " pod="openshift-marketplace/certified-operators-ln8lc" Jan 20 09:10:17 crc kubenswrapper[5115]: I0120 09:10:17.310007 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-m6krp" podStartSLOduration=106.309985694 podStartE2EDuration="1m46.309985694s" podCreationTimestamp="2026-01-20 09:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:10:17.265489122 +0000 UTC m=+127.434267652" watchObservedRunningTime="2026-01-20 09:10:17.309985694 +0000 UTC m=+127.478764224" Jan 20 09:10:17 crc kubenswrapper[5115]: I0120 09:10:17.312312 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-5777786469-s5mfg" podStartSLOduration=106.312305057 podStartE2EDuration="1m46.312305057s" podCreationTimestamp="2026-01-20 09:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:10:17.309067429 +0000 UTC m=+127.477845959" watchObservedRunningTime="2026-01-20 09:10:17.312305057 +0000 UTC m=+127.481083577" Jan 20 09:10:17 crc kubenswrapper[5115]: I0120 09:10:17.323548 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ft22z\" (UniqueName: \"kubernetes.io/projected/098c57a3-a775-41d0-b528-6833df51eb70-kube-api-access-ft22z\") pod \"certified-operators-ln8lc\" (UID: \"098c57a3-a775-41d0-b528-6833df51eb70\") " pod="openshift-marketplace/certified-operators-ln8lc" Jan 20 09:10:17 crc kubenswrapper[5115]: I0120 09:10:17.370773 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:10:17 crc kubenswrapper[5115]: E0120 09:10:17.371178 5115 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:17.871156584 +0000 UTC m=+128.039935114 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:17 crc kubenswrapper[5115]: I0120 09:10:17.396023 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-69db94689b-ztcgs" podStartSLOduration=106.395995709 podStartE2EDuration="1m46.395995709s" podCreationTimestamp="2026-01-20 09:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:10:17.345402993 +0000 UTC m=+127.514181523" watchObservedRunningTime="2026-01-20 09:10:17.395995709 +0000 UTC m=+127.564774229" Jan 20 09:10:17 crc kubenswrapper[5115]: I0120 09:10:17.401369 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-9ddfb9f55-xn6qp" podStartSLOduration=106.401352202 podStartE2EDuration="1m46.401352202s" podCreationTimestamp="2026-01-20 09:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:10:17.379856237 +0000 UTC m=+127.548634757" watchObservedRunningTime="2026-01-20 09:10:17.401352202 +0000 UTC m=+127.570130752" Jan 20 09:10:17 crc kubenswrapper[5115]: I0120 09:10:17.428084 5115 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-8596bd845d-4x4rk" podStartSLOduration=106.428059458 podStartE2EDuration="1m46.428059458s" podCreationTimestamp="2026-01-20 09:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:10:17.423288261 +0000 UTC m=+127.592066791" watchObservedRunningTime="2026-01-20 09:10:17.428059458 +0000 UTC m=+127.596837998" Jan 20 09:10:17 crc kubenswrapper[5115]: I0120 09:10:17.476714 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:17 crc kubenswrapper[5115]: E0120 09:10:17.477210 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:17.977197035 +0000 UTC m=+128.145975565 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:17 crc kubenswrapper[5115]: I0120 09:10:17.515378 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mrnvw"]
Jan 20 09:10:17 crc kubenswrapper[5115]: I0120 09:10:17.532473 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2dlnj"]
Jan 20 09:10:17 crc kubenswrapper[5115]: I0120 09:10:17.577783 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 20 09:10:17 crc kubenswrapper[5115]: E0120 09:10:17.578126 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:18.07810574 +0000 UTC m=+128.246884270 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:17 crc kubenswrapper[5115]: I0120 09:10:17.620105 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ln8lc"
Jan 20 09:10:17 crc kubenswrapper[5115]: I0120 09:10:17.679467 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j"
Jan 20 09:10:17 crc kubenswrapper[5115]: E0120 09:10:17.680104 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:18.180068932 +0000 UTC m=+128.348847462 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:17 crc kubenswrapper[5115]: I0120 09:10:17.738031 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cn6h9"]
Jan 20 09:10:17 crc kubenswrapper[5115]: I0120 09:10:17.749840 5115 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-n9hxc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 20 09:10:17 crc kubenswrapper[5115]: [-]has-synced failed: reason withheld
Jan 20 09:10:17 crc kubenswrapper[5115]: [+]process-running ok
Jan 20 09:10:17 crc kubenswrapper[5115]: healthz check failed
Jan 20 09:10:17 crc kubenswrapper[5115]: I0120 09:10:17.749931 5115 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-n9hxc" podUID="0d738dd6-3c15-4131-837d-591792cb41cd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 20 09:10:17 crc kubenswrapper[5115]: I0120 09:10:17.782192 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 20 09:10:17 crc kubenswrapper[5115]: E0120 09:10:17.782589 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:18.282563659 +0000 UTC m=+128.451342189 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:17 crc kubenswrapper[5115]: I0120 09:10:17.821587 5115 ???:1] "http: TLS handshake error from 192.168.126.11:55280: no serving certificate available for the kubelet"
Jan 20 09:10:17 crc kubenswrapper[5115]: I0120 09:10:17.884233 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j"
Jan 20 09:10:17 crc kubenswrapper[5115]: E0120 09:10:17.884666 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:18.384649624 +0000 UTC m=+128.553428154 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:17 crc kubenswrapper[5115]: I0120 09:10:17.988597 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 20 09:10:17 crc kubenswrapper[5115]: E0120 09:10:17.989106 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:18.489068623 +0000 UTC m=+128.657847153 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:17 crc kubenswrapper[5115]: I0120 09:10:17.989683 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j"
Jan 20 09:10:17 crc kubenswrapper[5115]: E0120 09:10:17.990055 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:18.490048339 +0000 UTC m=+128.658826859 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:18 crc kubenswrapper[5115]: I0120 09:10:18.091095 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 20 09:10:18 crc kubenswrapper[5115]: E0120 09:10:18.091346 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:18.591306022 +0000 UTC m=+128.760084552 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:18 crc kubenswrapper[5115]: I0120 09:10:18.091921 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j"
Jan 20 09:10:18 crc kubenswrapper[5115]: E0120 09:10:18.092314 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:18.592294799 +0000 UTC m=+128.761073329 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:18 crc kubenswrapper[5115]: I0120 09:10:18.157714 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ln8lc"]
Jan 20 09:10:18 crc kubenswrapper[5115]: I0120 09:10:18.193616 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 20 09:10:18 crc kubenswrapper[5115]: E0120 09:10:18.193876 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:18.69383961 +0000 UTC m=+128.862618140 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:18 crc kubenswrapper[5115]: I0120 09:10:18.224647 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2dlnj" event={"ID":"1d51d284-ea4b-4e3f-95bd-de28c5df1f3c","Type":"ContainerStarted","Data":"b623557fb8fa89838a7fffcb0c7e471eeaf77057e10e543a3504832324b27404"}
Jan 20 09:10:18 crc kubenswrapper[5115]: I0120 09:10:18.224713 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cn6h9" event={"ID":"c182ef91-1ca8-4330-bd75-8120c4401b54","Type":"ContainerStarted","Data":"91ffd30d0b07fe8b71ba5e2b62abd0321e935c136baf579cb7b5b85fbfc8da21"}
Jan 20 09:10:18 crc kubenswrapper[5115]: I0120 09:10:18.224731 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mrnvw" event={"ID":"e388c4ad-0d02-4736-b503-a96f7478edb4","Type":"ContainerStarted","Data":"ba3c29f3ff3951d423c587bfc54fde3036fb68c70ae8bcabcb0199b3d1a764a2"}
Jan 20 09:10:18 crc kubenswrapper[5115]: I0120 09:10:18.224743 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ln8lc" event={"ID":"098c57a3-a775-41d0-b528-6833df51eb70","Type":"ContainerStarted","Data":"092aa312ded9179826cf1c7718d79766d577bbc74bfdc3260b75b3acb73e6544"}
Jan 20 09:10:18 crc kubenswrapper[5115]: I0120 09:10:18.295996 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j"
Jan 20 09:10:18 crc kubenswrapper[5115]: E0120 09:10:18.296393 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:18.796377219 +0000 UTC m=+128.965155749 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:18 crc kubenswrapper[5115]: I0120 09:10:18.303938 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5plkc"]
Jan 20 09:10:18 crc kubenswrapper[5115]: I0120 09:10:18.314252 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5plkc"
Jan 20 09:10:18 crc kubenswrapper[5115]: I0120 09:10:18.333285 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\""
Jan 20 09:10:18 crc kubenswrapper[5115]: I0120 09:10:18.380232 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5plkc"]
Jan 20 09:10:18 crc kubenswrapper[5115]: I0120 09:10:18.401957 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 20 09:10:18 crc kubenswrapper[5115]: E0120 09:10:18.403207 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:18.90316584 +0000 UTC m=+129.071944480 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:18 crc kubenswrapper[5115]: I0120 09:10:18.408011 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9d4e242-d348-4f3f-8453-612b19e41f3a-utilities\") pod \"redhat-marketplace-5plkc\" (UID: \"f9d4e242-d348-4f3f-8453-612b19e41f3a\") " pod="openshift-marketplace/redhat-marketplace-5plkc"
Jan 20 09:10:18 crc kubenswrapper[5115]: I0120 09:10:18.408283 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9d4e242-d348-4f3f-8453-612b19e41f3a-catalog-content\") pod \"redhat-marketplace-5plkc\" (UID: \"f9d4e242-d348-4f3f-8453-612b19e41f3a\") " pod="openshift-marketplace/redhat-marketplace-5plkc"
Jan 20 09:10:18 crc kubenswrapper[5115]: I0120 09:10:18.408585 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j"
Jan 20 09:10:18 crc kubenswrapper[5115]: I0120 09:10:18.408718 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6cc7\" (UniqueName: \"kubernetes.io/projected/f9d4e242-d348-4f3f-8453-612b19e41f3a-kube-api-access-x6cc7\") pod \"redhat-marketplace-5plkc\" (UID: \"f9d4e242-d348-4f3f-8453-612b19e41f3a\") " pod="openshift-marketplace/redhat-marketplace-5plkc"
Jan 20 09:10:18 crc kubenswrapper[5115]: E0120 09:10:18.414663 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:18.914641177 +0000 UTC m=+129.083419707 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:18 crc kubenswrapper[5115]: I0120 09:10:18.510395 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 20 09:10:18 crc kubenswrapper[5115]: I0120 09:10:18.510584 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9d4e242-d348-4f3f-8453-612b19e41f3a-utilities\") pod \"redhat-marketplace-5plkc\" (UID: \"f9d4e242-d348-4f3f-8453-612b19e41f3a\") " pod="openshift-marketplace/redhat-marketplace-5plkc"
Jan 20 09:10:18 crc kubenswrapper[5115]: I0120 09:10:18.510640 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9d4e242-d348-4f3f-8453-612b19e41f3a-catalog-content\") pod \"redhat-marketplace-5plkc\" (UID: \"f9d4e242-d348-4f3f-8453-612b19e41f3a\") " pod="openshift-marketplace/redhat-marketplace-5plkc"
Jan 20 09:10:18 crc kubenswrapper[5115]: I0120 09:10:18.510689 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-x6cc7\" (UniqueName: \"kubernetes.io/projected/f9d4e242-d348-4f3f-8453-612b19e41f3a-kube-api-access-x6cc7\") pod \"redhat-marketplace-5plkc\" (UID: \"f9d4e242-d348-4f3f-8453-612b19e41f3a\") " pod="openshift-marketplace/redhat-marketplace-5plkc"
Jan 20 09:10:18 crc kubenswrapper[5115]: I0120 09:10:18.511660 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9d4e242-d348-4f3f-8453-612b19e41f3a-catalog-content\") pod \"redhat-marketplace-5plkc\" (UID: \"f9d4e242-d348-4f3f-8453-612b19e41f3a\") " pod="openshift-marketplace/redhat-marketplace-5plkc"
Jan 20 09:10:18 crc kubenswrapper[5115]: I0120 09:10:18.511656 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9d4e242-d348-4f3f-8453-612b19e41f3a-utilities\") pod \"redhat-marketplace-5plkc\" (UID: \"f9d4e242-d348-4f3f-8453-612b19e41f3a\") " pod="openshift-marketplace/redhat-marketplace-5plkc"
Jan 20 09:10:18 crc kubenswrapper[5115]: E0120 09:10:18.511760 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:19.011727059 +0000 UTC m=+129.180505579 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:18 crc kubenswrapper[5115]: I0120 09:10:18.562121 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6cc7\" (UniqueName: \"kubernetes.io/projected/f9d4e242-d348-4f3f-8453-612b19e41f3a-kube-api-access-x6cc7\") pod \"redhat-marketplace-5plkc\" (UID: \"f9d4e242-d348-4f3f-8453-612b19e41f3a\") " pod="openshift-marketplace/redhat-marketplace-5plkc"
Jan 20 09:10:18 crc kubenswrapper[5115]: I0120 09:10:18.612128 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j"
Jan 20 09:10:18 crc kubenswrapper[5115]: E0120 09:10:18.612700 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:19.112669194 +0000 UTC m=+129.281447724 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:18 crc kubenswrapper[5115]: I0120 09:10:18.707238 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-b5s99"]
Jan 20 09:10:18 crc kubenswrapper[5115]: I0120 09:10:18.713861 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 20 09:10:18 crc kubenswrapper[5115]: E0120 09:10:18.714036 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:19.21399822 +0000 UTC m=+129.382776750 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:18 crc kubenswrapper[5115]: I0120 09:10:18.714517 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j"
Jan 20 09:10:18 crc kubenswrapper[5115]: E0120 09:10:18.715113 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:19.215089659 +0000 UTC m=+129.383868189 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:18 crc kubenswrapper[5115]: I0120 09:10:18.716817 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b5s99"
Jan 20 09:10:18 crc kubenswrapper[5115]: I0120 09:10:18.718151 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b5s99"]
Jan 20 09:10:18 crc kubenswrapper[5115]: I0120 09:10:18.734139 5115 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-n9hxc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 20 09:10:18 crc kubenswrapper[5115]: [-]has-synced failed: reason withheld
Jan 20 09:10:18 crc kubenswrapper[5115]: [+]process-running ok
Jan 20 09:10:18 crc kubenswrapper[5115]: healthz check failed
Jan 20 09:10:18 crc kubenswrapper[5115]: I0120 09:10:18.734235 5115 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-n9hxc" podUID="0d738dd6-3c15-4131-837d-591792cb41cd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 20 09:10:18 crc kubenswrapper[5115]: I0120 09:10:18.815859 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 20 09:10:18 crc kubenswrapper[5115]: I0120 09:10:18.816094 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b758f72-1c19-45ea-8f26-580952f254a6-utilities\") pod \"redhat-marketplace-b5s99\" (UID: \"8b758f72-1c19-45ea-8f26-580952f254a6\") " pod="openshift-marketplace/redhat-marketplace-b5s99"
Jan 20 09:10:18 crc kubenswrapper[5115]: I0120 09:10:18.816145 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdgqv\" (UniqueName: \"kubernetes.io/projected/8b758f72-1c19-45ea-8f26-580952f254a6-kube-api-access-pdgqv\") pod \"redhat-marketplace-b5s99\" (UID: \"8b758f72-1c19-45ea-8f26-580952f254a6\") " pod="openshift-marketplace/redhat-marketplace-b5s99"
Jan 20 09:10:18 crc kubenswrapper[5115]: I0120 09:10:18.816226 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b758f72-1c19-45ea-8f26-580952f254a6-catalog-content\") pod \"redhat-marketplace-b5s99\" (UID: \"8b758f72-1c19-45ea-8f26-580952f254a6\") " pod="openshift-marketplace/redhat-marketplace-b5s99"
Jan 20 09:10:18 crc kubenswrapper[5115]: E0120 09:10:18.816367 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:19.316340942 +0000 UTC m=+129.485119472 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:18 crc kubenswrapper[5115]: I0120 09:10:18.852346 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5plkc"
Jan 20 09:10:18 crc kubenswrapper[5115]: I0120 09:10:18.918081 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b758f72-1c19-45ea-8f26-580952f254a6-utilities\") pod \"redhat-marketplace-b5s99\" (UID: \"8b758f72-1c19-45ea-8f26-580952f254a6\") " pod="openshift-marketplace/redhat-marketplace-b5s99"
Jan 20 09:10:18 crc kubenswrapper[5115]: I0120 09:10:18.918135 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pdgqv\" (UniqueName: \"kubernetes.io/projected/8b758f72-1c19-45ea-8f26-580952f254a6-kube-api-access-pdgqv\") pod \"redhat-marketplace-b5s99\" (UID: \"8b758f72-1c19-45ea-8f26-580952f254a6\") " pod="openshift-marketplace/redhat-marketplace-b5s99"
Jan 20 09:10:18 crc kubenswrapper[5115]: I0120 09:10:18.918159 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j"
Jan 20 09:10:18 crc kubenswrapper[5115]: I0120 09:10:18.918208 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b758f72-1c19-45ea-8f26-580952f254a6-catalog-content\") pod \"redhat-marketplace-b5s99\" (UID: \"8b758f72-1c19-45ea-8f26-580952f254a6\") " pod="openshift-marketplace/redhat-marketplace-b5s99"
Jan 20 09:10:18 crc kubenswrapper[5115]: I0120 09:10:18.919207 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b758f72-1c19-45ea-8f26-580952f254a6-catalog-content\") pod \"redhat-marketplace-b5s99\" (UID: \"8b758f72-1c19-45ea-8f26-580952f254a6\") " pod="openshift-marketplace/redhat-marketplace-b5s99"
Jan 20 09:10:18 crc kubenswrapper[5115]: I0120 09:10:18.919427 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b758f72-1c19-45ea-8f26-580952f254a6-utilities\") pod \"redhat-marketplace-b5s99\" (UID: \"8b758f72-1c19-45ea-8f26-580952f254a6\") " pod="openshift-marketplace/redhat-marketplace-b5s99"
Jan 20 09:10:18 crc kubenswrapper[5115]: E0120 09:10:18.919522 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:19.419507267 +0000 UTC m=+129.588285797 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:18 crc kubenswrapper[5115]: I0120 09:10:18.963971 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdgqv\" (UniqueName: \"kubernetes.io/projected/8b758f72-1c19-45ea-8f26-580952f254a6-kube-api-access-pdgqv\") pod \"redhat-marketplace-b5s99\" (UID: \"8b758f72-1c19-45ea-8f26-580952f254a6\") " pod="openshift-marketplace/redhat-marketplace-b5s99"
Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.020160 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 20 09:10:19 crc kubenswrapper[5115]: E0120 09:10:19.020416 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:19.5203736 +0000 UTC m=+129.689152130 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.020854 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"]
Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.021034 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j"
Jan 20 09:10:19 crc kubenswrapper[5115]: E0120 09:10:19.021488 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:19.521468539 +0000 UTC m=+129.690247069 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.029520 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc"
Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.032457 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler\"/\"installer-sa-dockercfg-qpkss\""
Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.034836 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"]
Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.039771 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler\"/\"kube-root-ca.crt\""
Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.115783 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5plkc"]
Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.122495 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 20 09:10:19 crc kubenswrapper[5115]: E0120 09:10:19.122679 5115 nestedpendingoperations.go:348] Operation for
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:19.622649101 +0000 UTC m=+129.791427631 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.122804 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f22a9e9c-d2a2-43c0-91ae-e65e338d1fd3-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"f22a9e9c-d2a2-43c0-91ae-e65e338d1fd3\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.122842 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f22a9e9c-d2a2-43c0-91ae-e65e338d1fd3-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"f22a9e9c-d2a2-43c0-91ae-e65e338d1fd3\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.122931 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" 
Jan 20 09:10:19 crc kubenswrapper[5115]: E0120 09:10:19.123309 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:19.623299108 +0000 UTC m=+129.792077638 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:19 crc kubenswrapper[5115]: W0120 09:10:19.124288 5115 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf9d4e242_d348_4f3f_8453_612b19e41f3a.slice/crio-50d3c0e76b095c21c4ac1a5beba7290e74c3ffa7941936c22e8017974e850944 WatchSource:0}: Error finding container 50d3c0e76b095c21c4ac1a5beba7290e74c3ffa7941936c22e8017974e850944: Status 404 returned error can't find the container with id 50d3c0e76b095c21c4ac1a5beba7290e74c3ffa7941936c22e8017974e850944 Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.137216 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b5s99" Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.223829 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.224032 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f22a9e9c-d2a2-43c0-91ae-e65e338d1fd3-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"f22a9e9c-d2a2-43c0-91ae-e65e338d1fd3\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.224065 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f22a9e9c-d2a2-43c0-91ae-e65e338d1fd3-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"f22a9e9c-d2a2-43c0-91ae-e65e338d1fd3\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 20 09:10:19 crc kubenswrapper[5115]: E0120 09:10:19.224349 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:19.724327255 +0000 UTC m=+129.893105785 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.224397 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f22a9e9c-d2a2-43c0-91ae-e65e338d1fd3-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"f22a9e9c-d2a2-43c0-91ae-e65e338d1fd3\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.245857 5115 generic.go:358] "Generic (PLEG): container finished" podID="082f3bd2-f112-4f2e-b955-0826aac6df97" containerID="91b474462e17975b2a2291c38c1eb2339450031fdea7fbcff486b36751736b0a" exitCode=0 Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.246080 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29481660-hh6m6" event={"ID":"082f3bd2-f112-4f2e-b955-0826aac6df97","Type":"ContainerDied","Data":"91b474462e17975b2a2291c38c1eb2339450031fdea7fbcff486b36751736b0a"} Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.249983 5115 generic.go:358] "Generic (PLEG): container finished" podID="c182ef91-1ca8-4330-bd75-8120c4401b54" containerID="cb35ec44b7685ef3772567937b1f41239bca24193257b445ea714ac16c6bf55a" exitCode=0 Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.250122 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cn6h9" event={"ID":"c182ef91-1ca8-4330-bd75-8120c4401b54","Type":"ContainerDied","Data":"cb35ec44b7685ef3772567937b1f41239bca24193257b445ea714ac16c6bf55a"} 
Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.251690 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f22a9e9c-d2a2-43c0-91ae-e65e338d1fd3-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"f22a9e9c-d2a2-43c0-91ae-e65e338d1fd3\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.256872 5115 generic.go:358] "Generic (PLEG): container finished" podID="e388c4ad-0d02-4736-b503-a96f7478edb4" containerID="641a2305dbc76735572c7584f2d8452f84f02582dbd2624bbe12d1f145836a77" exitCode=0 Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.256967 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mrnvw" event={"ID":"e388c4ad-0d02-4736-b503-a96f7478edb4","Type":"ContainerDied","Data":"641a2305dbc76735572c7584f2d8452f84f02582dbd2624bbe12d1f145836a77"} Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.280161 5115 generic.go:358] "Generic (PLEG): container finished" podID="098c57a3-a775-41d0-b528-6833df51eb70" containerID="f88e943d46c00e03b49000272db95a963fb31d5df3dc7dea80bbd32f957cb111" exitCode=0 Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.280926 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ln8lc" event={"ID":"098c57a3-a775-41d0-b528-6833df51eb70","Type":"ContainerDied","Data":"f88e943d46c00e03b49000272db95a963fb31d5df3dc7dea80bbd32f957cb111"} Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.293267 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5plkc" event={"ID":"f9d4e242-d348-4f3f-8453-612b19e41f3a","Type":"ContainerStarted","Data":"50d3c0e76b095c21c4ac1a5beba7290e74c3ffa7941936c22e8017974e850944"} Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.308810 5115 generic.go:358] "Generic (PLEG): container finished" 
podID="1d51d284-ea4b-4e3f-95bd-de28c5df1f3c" containerID="06668f7c92efbf93f8c0b42e46d251a0aadb5b80b4c08ce779cc27955ee5a124" exitCode=0 Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.310646 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2dlnj" event={"ID":"1d51d284-ea4b-4e3f-95bd-de28c5df1f3c","Type":"ContainerDied","Data":"06668f7c92efbf93f8c0b42e46d251a0aadb5b80b4c08ce779cc27955ee5a124"} Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.330574 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:19 crc kubenswrapper[5115]: E0120 09:10:19.331003 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:19.830988604 +0000 UTC m=+129.999767134 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.366855 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.431825 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:10:19 crc kubenswrapper[5115]: E0120 09:10:19.432050 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:19.932007652 +0000 UTC m=+130.100786182 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.445622 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-apiserver/apiserver-9ddfb9f55-xn6qp" Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.446143 5115 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-9ddfb9f55-xn6qp" Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.460694 5115 patch_prober.go:28] interesting pod/apiserver-9ddfb9f55-xn6qp container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok 
Jan 20 09:10:19 crc kubenswrapper[5115]: [+]log ok Jan 20 09:10:19 crc kubenswrapper[5115]: [+]etcd ok Jan 20 09:10:19 crc kubenswrapper[5115]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 20 09:10:19 crc kubenswrapper[5115]: [+]poststarthook/generic-apiserver-start-informers ok Jan 20 09:10:19 crc kubenswrapper[5115]: [+]poststarthook/max-in-flight-filter ok Jan 20 09:10:19 crc kubenswrapper[5115]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 20 09:10:19 crc kubenswrapper[5115]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 20 09:10:19 crc kubenswrapper[5115]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 20 09:10:19 crc kubenswrapper[5115]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Jan 20 09:10:19 crc kubenswrapper[5115]: [+]poststarthook/project.openshift.io-projectcache ok Jan 20 09:10:19 crc kubenswrapper[5115]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 20 09:10:19 crc kubenswrapper[5115]: [+]poststarthook/openshift.io-startinformers ok Jan 20 09:10:19 crc kubenswrapper[5115]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 20 09:10:19 crc kubenswrapper[5115]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 20 09:10:19 crc kubenswrapper[5115]: livez check failed Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.460994 5115 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-9ddfb9f55-xn6qp" podUID="72f63421-cfe9-45f8-85fe-b779a81a7ebb" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.489657 5115 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-8596bd845d-4x4rk" Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.489755 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" 
status="not ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-4x4rk" Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.501355 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-45pv6"] Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.510484 5115 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-8596bd845d-4x4rk" Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.510663 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-45pv6" Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.513166 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b5s99"] Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.517639 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.523223 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-45pv6"] Jan 20 09:10:19 crc kubenswrapper[5115]: W0120 09:10:19.527136 5115 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b758f72_1c19_45ea_8f26_580952f254a6.slice/crio-d7901e6ddc7891030f2ad2227e71e157692b55779b1855cb63d09ff8803bd38a WatchSource:0}: Error finding container d7901e6ddc7891030f2ad2227e71e157692b55779b1855cb63d09ff8803bd38a: Status 404 returned error can't find the container with id d7901e6ddc7891030f2ad2227e71e157692b55779b1855cb63d09ff8803bd38a Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.531612 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/console-64d44f6ddf-78z8z" Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.533943 5115 kubelet.go:2658] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-64d44f6ddf-78z8z" Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.534884 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:19 crc kubenswrapper[5115]: E0120 09:10:19.535960 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:20.035945287 +0000 UTC m=+130.204723817 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.585190 5115 patch_prober.go:28] interesting pod/console-64d44f6ddf-78z8z container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.25:8443/health\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.585215 5115 patch_prober.go:28] interesting pod/downloads-747b44746d-ljj2s container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: 
connection refused" start-of-body= Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.585261 5115 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-78z8z" podUID="9aa837bd-63fc-4bb8-b158-d8632117a117" containerName="console" probeResult="failure" output="Get \"https://10.217.0.25:8443/health\": dial tcp 10.217.0.25:8443: connect: connection refused" Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.585308 5115 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-ljj2s" podUID="b9ac66ad-91ae-4ffd-b159-a7549ca71803" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.637339 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.637523 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57355d9d-a14f-4cf0-8a63-842b27765063-utilities\") pod \"redhat-operators-45pv6\" (UID: \"57355d9d-a14f-4cf0-8a63-842b27765063\") " pod="openshift-marketplace/redhat-operators-45pv6" Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.637679 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4msjm\" (UniqueName: \"kubernetes.io/projected/57355d9d-a14f-4cf0-8a63-842b27765063-kube-api-access-4msjm\") pod \"redhat-operators-45pv6\" (UID: \"57355d9d-a14f-4cf0-8a63-842b27765063\") " pod="openshift-marketplace/redhat-operators-45pv6" Jan 20 09:10:19 
crc kubenswrapper[5115]: I0120 09:10:19.637782 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57355d9d-a14f-4cf0-8a63-842b27765063-catalog-content\") pod \"redhat-operators-45pv6\" (UID: \"57355d9d-a14f-4cf0-8a63-842b27765063\") " pod="openshift-marketplace/redhat-operators-45pv6" Jan 20 09:10:19 crc kubenswrapper[5115]: E0120 09:10:19.638857 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:20.138825963 +0000 UTC m=+130.307604493 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.737232 5115 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-n9hxc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 09:10:19 crc kubenswrapper[5115]: [-]has-synced failed: reason withheld Jan 20 09:10:19 crc kubenswrapper[5115]: [+]process-running ok Jan 20 09:10:19 crc kubenswrapper[5115]: healthz check failed Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.737359 5115 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-n9hxc" podUID="0d738dd6-3c15-4131-837d-591792cb41cd" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.742815 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57355d9d-a14f-4cf0-8a63-842b27765063-catalog-content\") pod \"redhat-operators-45pv6\" (UID: \"57355d9d-a14f-4cf0-8a63-842b27765063\") " pod="openshift-marketplace/redhat-operators-45pv6" Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.743569 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.746839 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57355d9d-a14f-4cf0-8a63-842b27765063-utilities\") pod \"redhat-operators-45pv6\" (UID: \"57355d9d-a14f-4cf0-8a63-842b27765063\") " pod="openshift-marketplace/redhat-operators-45pv6" Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.744296 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57355d9d-a14f-4cf0-8a63-842b27765063-catalog-content\") pod \"redhat-operators-45pv6\" (UID: \"57355d9d-a14f-4cf0-8a63-842b27765063\") " pod="openshift-marketplace/redhat-operators-45pv6" Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.747434 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4msjm\" (UniqueName: \"kubernetes.io/projected/57355d9d-a14f-4cf0-8a63-842b27765063-kube-api-access-4msjm\") pod \"redhat-operators-45pv6\" (UID: \"57355d9d-a14f-4cf0-8a63-842b27765063\") " pod="openshift-marketplace/redhat-operators-45pv6" Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.747539 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.747634 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57355d9d-a14f-4cf0-8a63-842b27765063-utilities\") pod \"redhat-operators-45pv6\" (UID: \"57355d9d-a14f-4cf0-8a63-842b27765063\") " pod="openshift-marketplace/redhat-operators-45pv6" Jan 20 09:10:19 crc kubenswrapper[5115]: E0120 09:10:19.748306 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:20.248281736 +0000 UTC m=+130.417060266 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.772327 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4msjm\" (UniqueName: \"kubernetes.io/projected/57355d9d-a14f-4cf0-8a63-842b27765063-kube-api-access-4msjm\") pod \"redhat-operators-45pv6\" (UID: \"57355d9d-a14f-4cf0-8a63-842b27765063\") " pod="openshift-marketplace/redhat-operators-45pv6" Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.829499 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-45pv6" Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.850184 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:10:19 crc kubenswrapper[5115]: E0120 09:10:19.850623 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:20.350601959 +0000 UTC m=+130.519380489 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.919553 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vv5qk"] Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.927944 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vv5qk" Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.937302 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vv5qk"] Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.956502 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:19 crc kubenswrapper[5115]: E0120 09:10:19.956946 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:20.456930238 +0000 UTC m=+130.625708768 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:20 crc kubenswrapper[5115]: I0120 09:10:20.058161 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:10:20 crc kubenswrapper[5115]: I0120 09:10:20.058765 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3-catalog-content\") pod \"redhat-operators-vv5qk\" (UID: \"b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3\") " pod="openshift-marketplace/redhat-operators-vv5qk" Jan 20 09:10:20 crc kubenswrapper[5115]: I0120 09:10:20.058802 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3-utilities\") pod \"redhat-operators-vv5qk\" (UID: \"b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3\") " pod="openshift-marketplace/redhat-operators-vv5qk" Jan 20 09:10:20 crc kubenswrapper[5115]: I0120 09:10:20.058833 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4shq\" (UniqueName: \"kubernetes.io/projected/b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3-kube-api-access-w4shq\") pod \"redhat-operators-vv5qk\" (UID: 
\"b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3\") " pod="openshift-marketplace/redhat-operators-vv5qk" Jan 20 09:10:20 crc kubenswrapper[5115]: E0120 09:10:20.059032 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:20.558981833 +0000 UTC m=+130.727760373 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:20 crc kubenswrapper[5115]: I0120 09:10:20.139366 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-45pv6"] Jan 20 09:10:20 crc kubenswrapper[5115]: W0120 09:10:20.151079 5115 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57355d9d_a14f_4cf0_8a63_842b27765063.slice/crio-2a29832ffd9412a21621468b6591cb9a7196b1735133523a4d5919937f22f017 WatchSource:0}: Error finding container 2a29832ffd9412a21621468b6591cb9a7196b1735133523a4d5919937f22f017: Status 404 returned error can't find the container with id 2a29832ffd9412a21621468b6591cb9a7196b1735133523a4d5919937f22f017 Jan 20 09:10:20 crc kubenswrapper[5115]: I0120 09:10:20.159866 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3-catalog-content\") pod \"redhat-operators-vv5qk\" (UID: \"b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3\") " 
pod="openshift-marketplace/redhat-operators-vv5qk" Jan 20 09:10:20 crc kubenswrapper[5115]: I0120 09:10:20.160052 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3-utilities\") pod \"redhat-operators-vv5qk\" (UID: \"b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3\") " pod="openshift-marketplace/redhat-operators-vv5qk" Jan 20 09:10:20 crc kubenswrapper[5115]: I0120 09:10:20.160087 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-w4shq\" (UniqueName: \"kubernetes.io/projected/b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3-kube-api-access-w4shq\") pod \"redhat-operators-vv5qk\" (UID: \"b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3\") " pod="openshift-marketplace/redhat-operators-vv5qk" Jan 20 09:10:20 crc kubenswrapper[5115]: I0120 09:10:20.160169 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:20 crc kubenswrapper[5115]: E0120 09:10:20.160483 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:20.660468203 +0000 UTC m=+130.829246733 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:20 crc kubenswrapper[5115]: I0120 09:10:20.160607 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3-catalog-content\") pod \"redhat-operators-vv5qk\" (UID: \"b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3\") " pod="openshift-marketplace/redhat-operators-vv5qk" Jan 20 09:10:20 crc kubenswrapper[5115]: I0120 09:10:20.160634 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3-utilities\") pod \"redhat-operators-vv5qk\" (UID: \"b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3\") " pod="openshift-marketplace/redhat-operators-vv5qk" Jan 20 09:10:20 crc kubenswrapper[5115]: I0120 09:10:20.186169 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4shq\" (UniqueName: \"kubernetes.io/projected/b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3-kube-api-access-w4shq\") pod \"redhat-operators-vv5qk\" (UID: \"b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3\") " pod="openshift-marketplace/redhat-operators-vv5qk" Jan 20 09:10:20 crc kubenswrapper[5115]: I0120 09:10:20.256469 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vv5qk" Jan 20 09:10:20 crc kubenswrapper[5115]: I0120 09:10:20.261855 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:10:20 crc kubenswrapper[5115]: E0120 09:10:20.262766 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:20.762741924 +0000 UTC m=+130.931520454 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:20 crc kubenswrapper[5115]: I0120 09:10:20.319742 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-45pv6" event={"ID":"57355d9d-a14f-4cf0-8a63-842b27765063","Type":"ContainerStarted","Data":"09806ac667b8436fffdd10a05c009eff6bb4282dd93406b629566c95167bc9ea"} Jan 20 09:10:20 crc kubenswrapper[5115]: I0120 09:10:20.319799 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-45pv6" event={"ID":"57355d9d-a14f-4cf0-8a63-842b27765063","Type":"ContainerStarted","Data":"2a29832ffd9412a21621468b6591cb9a7196b1735133523a4d5919937f22f017"} Jan 20 09:10:20 crc 
kubenswrapper[5115]: I0120 09:10:20.322484 5115 generic.go:358] "Generic (PLEG): container finished" podID="8b758f72-1c19-45ea-8f26-580952f254a6" containerID="bc05a2904480cda612c996cbe03bed8e6889a08a812820a545bd5567edf848da" exitCode=0 Jan 20 09:10:20 crc kubenswrapper[5115]: I0120 09:10:20.322654 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b5s99" event={"ID":"8b758f72-1c19-45ea-8f26-580952f254a6","Type":"ContainerDied","Data":"bc05a2904480cda612c996cbe03bed8e6889a08a812820a545bd5567edf848da"} Jan 20 09:10:20 crc kubenswrapper[5115]: I0120 09:10:20.322679 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b5s99" event={"ID":"8b758f72-1c19-45ea-8f26-580952f254a6","Type":"ContainerStarted","Data":"d7901e6ddc7891030f2ad2227e71e157692b55779b1855cb63d09ff8803bd38a"} Jan 20 09:10:20 crc kubenswrapper[5115]: I0120 09:10:20.328360 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-ttcl5" event={"ID":"a8dd6004-2cc4-4971-9dcb-18d8871286b8","Type":"ContainerStarted","Data":"4d9ad503e31517d22d202d7525f5c2ff549e311ae1997fc22f3fe1f8e1bcd594"} Jan 20 09:10:20 crc kubenswrapper[5115]: I0120 09:10:20.331574 5115 generic.go:358] "Generic (PLEG): container finished" podID="f9d4e242-d348-4f3f-8453-612b19e41f3a" containerID="292ea7ef1a462b0b3647f2424736d354073f39a37c563e3f2ffad608521d16f7" exitCode=0 Jan 20 09:10:20 crc kubenswrapper[5115]: I0120 09:10:20.331705 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5plkc" event={"ID":"f9d4e242-d348-4f3f-8453-612b19e41f3a","Type":"ContainerDied","Data":"292ea7ef1a462b0b3647f2424736d354073f39a37c563e3f2ffad608521d16f7"} Jan 20 09:10:20 crc kubenswrapper[5115]: I0120 09:10:20.336957 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" 
event={"ID":"f22a9e9c-d2a2-43c0-91ae-e65e338d1fd3","Type":"ContainerStarted","Data":"b4fa9a1ceaf5ad43ffd3fee419d8a0356e096f72ca2c6d2218b303494b3f72a4"} Jan 20 09:10:20 crc kubenswrapper[5115]: I0120 09:10:20.342179 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-4x4rk" Jan 20 09:10:20 crc kubenswrapper[5115]: I0120 09:10:20.363978 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:20 crc kubenswrapper[5115]: E0120 09:10:20.364371 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:20.864357766 +0000 UTC m=+131.033136296 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:20 crc kubenswrapper[5115]: I0120 09:10:20.433379 5115 ???:1] "http: TLS handshake error from 192.168.126.11:55296: no serving certificate available for the kubelet" Jan 20 09:10:20 crc kubenswrapper[5115]: I0120 09:10:20.471119 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:10:20 crc kubenswrapper[5115]: E0120 09:10:20.472679 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:20.972652849 +0000 UTC m=+131.141431379 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:20 crc kubenswrapper[5115]: I0120 09:10:20.573075 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:20 crc kubenswrapper[5115]: E0120 09:10:20.573533 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:21.073513832 +0000 UTC m=+131.242292362 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:20 crc kubenswrapper[5115]: I0120 09:10:20.674637 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:10:20 crc kubenswrapper[5115]: E0120 09:10:20.675127 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:21.175104984 +0000 UTC m=+131.343883514 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:20 crc kubenswrapper[5115]: I0120 09:10:20.722262 5115 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481660-hh6m6" Jan 20 09:10:20 crc kubenswrapper[5115]: I0120 09:10:20.729331 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ingress/router-default-68cf44c8b8-n9hxc" Jan 20 09:10:20 crc kubenswrapper[5115]: I0120 09:10:20.742034 5115 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-n9hxc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 09:10:20 crc kubenswrapper[5115]: [-]has-synced failed: reason withheld Jan 20 09:10:20 crc kubenswrapper[5115]: [+]process-running ok Jan 20 09:10:20 crc kubenswrapper[5115]: healthz check failed Jan 20 09:10:20 crc kubenswrapper[5115]: I0120 09:10:20.742115 5115 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-n9hxc" podUID="0d738dd6-3c15-4131-837d-591792cb41cd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 09:10:20 crc kubenswrapper[5115]: I0120 09:10:20.782589 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xqt2z\" (UniqueName: \"kubernetes.io/projected/082f3bd2-f112-4f2e-b955-0826aac6df97-kube-api-access-xqt2z\") pod \"082f3bd2-f112-4f2e-b955-0826aac6df97\" (UID: \"082f3bd2-f112-4f2e-b955-0826aac6df97\") " Jan 20 09:10:20 crc kubenswrapper[5115]: I0120 09:10:20.782937 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/082f3bd2-f112-4f2e-b955-0826aac6df97-config-volume\") pod \"082f3bd2-f112-4f2e-b955-0826aac6df97\" (UID: \"082f3bd2-f112-4f2e-b955-0826aac6df97\") " Jan 20 09:10:20 crc kubenswrapper[5115]: I0120 09:10:20.783240 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"secret-volume\" (UniqueName: \"kubernetes.io/secret/082f3bd2-f112-4f2e-b955-0826aac6df97-secret-volume\") pod \"082f3bd2-f112-4f2e-b955-0826aac6df97\" (UID: \"082f3bd2-f112-4f2e-b955-0826aac6df97\") " Jan 20 09:10:20 crc kubenswrapper[5115]: I0120 09:10:20.783401 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:20 crc kubenswrapper[5115]: E0120 09:10:20.783777 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:21.283756426 +0000 UTC m=+131.452534966 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:20 crc kubenswrapper[5115]: I0120 09:10:20.787258 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/082f3bd2-f112-4f2e-b955-0826aac6df97-config-volume" (OuterVolumeSpecName: "config-volume") pod "082f3bd2-f112-4f2e-b955-0826aac6df97" (UID: "082f3bd2-f112-4f2e-b955-0826aac6df97"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:10:20 crc kubenswrapper[5115]: I0120 09:10:20.796431 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/082f3bd2-f112-4f2e-b955-0826aac6df97-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "082f3bd2-f112-4f2e-b955-0826aac6df97" (UID: "082f3bd2-f112-4f2e-b955-0826aac6df97"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:10:20 crc kubenswrapper[5115]: I0120 09:10:20.815175 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vv5qk"] Jan 20 09:10:20 crc kubenswrapper[5115]: I0120 09:10:20.830312 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/082f3bd2-f112-4f2e-b955-0826aac6df97-kube-api-access-xqt2z" (OuterVolumeSpecName: "kube-api-access-xqt2z") pod "082f3bd2-f112-4f2e-b955-0826aac6df97" (UID: "082f3bd2-f112-4f2e-b955-0826aac6df97"). InnerVolumeSpecName "kube-api-access-xqt2z". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:10:20 crc kubenswrapper[5115]: I0120 09:10:20.885344 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:10:20 crc kubenswrapper[5115]: E0120 09:10:20.885613 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:21.385572855 +0000 UTC m=+131.554351385 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:20 crc kubenswrapper[5115]: I0120 09:10:20.886493 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xqt2z\" (UniqueName: \"kubernetes.io/projected/082f3bd2-f112-4f2e-b955-0826aac6df97-kube-api-access-xqt2z\") on node \"crc\" DevicePath \"\"" Jan 20 09:10:20 crc kubenswrapper[5115]: I0120 09:10:20.886521 5115 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/082f3bd2-f112-4f2e-b955-0826aac6df97-config-volume\") on node \"crc\" DevicePath \"\"" Jan 20 09:10:20 crc kubenswrapper[5115]: I0120 09:10:20.886532 5115 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/082f3bd2-f112-4f2e-b955-0826aac6df97-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 20 09:10:20 crc kubenswrapper[5115]: I0120 09:10:20.988234 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:20 crc kubenswrapper[5115]: E0120 09:10:20.988772 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2026-01-20 09:10:21.488754519 +0000 UTC m=+131.657533049 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:21 crc kubenswrapper[5115]: I0120 09:10:21.089023 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:10:21 crc kubenswrapper[5115]: E0120 09:10:21.089397 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:21.589349085 +0000 UTC m=+131.758127615 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:21 crc kubenswrapper[5115]: I0120 09:10:21.089770 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:21 crc kubenswrapper[5115]: E0120 09:10:21.090299 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:21.59027482 +0000 UTC m=+131.759053350 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:21 crc kubenswrapper[5115]: I0120 09:10:21.190995 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:10:21 crc kubenswrapper[5115]: E0120 09:10:21.191227 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:21.691183024 +0000 UTC m=+131.859961554 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:21 crc kubenswrapper[5115]: I0120 09:10:21.191493 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j"
Jan 20 09:10:21 crc kubenswrapper[5115]: E0120 09:10:21.191958 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:21.691934845 +0000 UTC m=+131.860713375 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:21 crc kubenswrapper[5115]: I0120 09:10:21.293844 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 20 09:10:21 crc kubenswrapper[5115]: E0120 09:10:21.294201 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:21.794156834 +0000 UTC m=+131.962935554 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:21 crc kubenswrapper[5115]: I0120 09:10:21.294788 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j"
Jan 20 09:10:21 crc kubenswrapper[5115]: E0120 09:10:21.297048 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:21.797022821 +0000 UTC m=+131.965801351 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:21 crc kubenswrapper[5115]: I0120 09:10:21.326625 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-5777786469-s5mfg"
Jan 20 09:10:21 crc kubenswrapper[5115]: I0120 09:10:21.356382 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29481660-hh6m6" event={"ID":"082f3bd2-f112-4f2e-b955-0826aac6df97","Type":"ContainerDied","Data":"08844f14a2be2524b67d25e6d9e317be36bfd5bc9b4b4cda240955fd50dbb961"}
Jan 20 09:10:21 crc kubenswrapper[5115]: I0120 09:10:21.356452 5115 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="08844f14a2be2524b67d25e6d9e317be36bfd5bc9b4b4cda240955fd50dbb961"
Jan 20 09:10:21 crc kubenswrapper[5115]: I0120 09:10:21.356559 5115 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481660-hh6m6"
Jan 20 09:10:21 crc kubenswrapper[5115]: I0120 09:10:21.366471 5115 generic.go:358] "Generic (PLEG): container finished" podID="f22a9e9c-d2a2-43c0-91ae-e65e338d1fd3" containerID="7be14e15da24a69df8084edf6f9152bf1adbc9a0753cde445072e14def02ab96" exitCode=0
Jan 20 09:10:21 crc kubenswrapper[5115]: I0120 09:10:21.366614 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"f22a9e9c-d2a2-43c0-91ae-e65e338d1fd3","Type":"ContainerDied","Data":"7be14e15da24a69df8084edf6f9152bf1adbc9a0753cde445072e14def02ab96"}
Jan 20 09:10:21 crc kubenswrapper[5115]: I0120 09:10:21.368771 5115 generic.go:358] "Generic (PLEG): container finished" podID="57355d9d-a14f-4cf0-8a63-842b27765063" containerID="09806ac667b8436fffdd10a05c009eff6bb4282dd93406b629566c95167bc9ea" exitCode=0
Jan 20 09:10:21 crc kubenswrapper[5115]: I0120 09:10:21.368982 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-45pv6" event={"ID":"57355d9d-a14f-4cf0-8a63-842b27765063","Type":"ContainerDied","Data":"09806ac667b8436fffdd10a05c009eff6bb4282dd93406b629566c95167bc9ea"}
Jan 20 09:10:21 crc kubenswrapper[5115]: I0120 09:10:21.391803 5115 generic.go:358] "Generic (PLEG): container finished" podID="b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3" containerID="5c908a7c31ca720aadea8c8fd54b15fdf8ae8be43be8f76f2eb7b5413aeb74c6" exitCode=0
Jan 20 09:10:21 crc kubenswrapper[5115]: I0120 09:10:21.392514 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vv5qk" event={"ID":"b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3","Type":"ContainerDied","Data":"5c908a7c31ca720aadea8c8fd54b15fdf8ae8be43be8f76f2eb7b5413aeb74c6"}
Jan 20 09:10:21 crc kubenswrapper[5115]: I0120 09:10:21.392578 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vv5qk" event={"ID":"b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3","Type":"ContainerStarted","Data":"523e078e78e6cfb054a40a6916767e994deee00e08213d3cb61f49d65fa63001"}
Jan 20 09:10:21 crc kubenswrapper[5115]: I0120 09:10:21.396787 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 20 09:10:21 crc kubenswrapper[5115]: E0120 09:10:21.397206 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:21.897177885 +0000 UTC m=+132.065956415 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:21 crc kubenswrapper[5115]: I0120 09:10:21.397356 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j"
Jan 20 09:10:21 crc kubenswrapper[5115]: E0120 09:10:21.397648 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:21.897641557 +0000 UTC m=+132.066420087 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:21 crc kubenswrapper[5115]: I0120 09:10:21.498819 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 20 09:10:21 crc kubenswrapper[5115]: E0120 09:10:21.499819 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:21.999800094 +0000 UTC m=+132.168578624 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:21 crc kubenswrapper[5115]: I0120 09:10:21.600389 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j"
Jan 20 09:10:21 crc kubenswrapper[5115]: E0120 09:10:21.601007 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:22.100980727 +0000 UTC m=+132.269759257 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:21 crc kubenswrapper[5115]: E0120 09:10:21.703017 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:22.2029759 +0000 UTC m=+132.371754430 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:21 crc kubenswrapper[5115]: I0120 09:10:21.702978 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 20 09:10:21 crc kubenswrapper[5115]: I0120 09:10:21.703521 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j"
Jan 20 09:10:21 crc kubenswrapper[5115]: E0120 09:10:21.703953 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:22.203937055 +0000 UTC m=+132.372715585 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:21 crc kubenswrapper[5115]: I0120 09:10:21.730162 5115 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-n9hxc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 20 09:10:21 crc kubenswrapper[5115]: [-]has-synced failed: reason withheld
Jan 20 09:10:21 crc kubenswrapper[5115]: [+]process-running ok
Jan 20 09:10:21 crc kubenswrapper[5115]: healthz check failed
Jan 20 09:10:21 crc kubenswrapper[5115]: I0120 09:10:21.730240 5115 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-n9hxc" podUID="0d738dd6-3c15-4131-837d-591792cb41cd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 20 09:10:21 crc kubenswrapper[5115]: I0120 09:10:21.805098 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 20 09:10:21 crc kubenswrapper[5115]: E0120 09:10:21.805352 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:22.305305812 +0000 UTC m=+132.474084382 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:21 crc kubenswrapper[5115]: I0120 09:10:21.805743 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j"
Jan 20 09:10:21 crc kubenswrapper[5115]: E0120 09:10:21.806541 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:22.306532515 +0000 UTC m=+132.475311045 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:21 crc kubenswrapper[5115]: I0120 09:10:21.907055 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 20 09:10:21 crc kubenswrapper[5115]: E0120 09:10:21.907318 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:22.407279604 +0000 UTC m=+132.576058134 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:21 crc kubenswrapper[5115]: I0120 09:10:21.907480 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j"
Jan 20 09:10:21 crc kubenswrapper[5115]: E0120 09:10:21.908022 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:22.408003604 +0000 UTC m=+132.576782134 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:22 crc kubenswrapper[5115]: I0120 09:10:22.009105 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 20 09:10:22 crc kubenswrapper[5115]: E0120 09:10:22.009829 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:22.509802143 +0000 UTC m=+132.678580673 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:22 crc kubenswrapper[5115]: I0120 09:10:22.010156 5115 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Jan 20 09:10:22 crc kubenswrapper[5115]: I0120 09:10:22.111855 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j"
Jan 20 09:10:22 crc kubenswrapper[5115]: E0120 09:10:22.114265 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:22.614250141 +0000 UTC m=+132.783028671 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:22 crc kubenswrapper[5115]: I0120 09:10:22.213275 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 20 09:10:22 crc kubenswrapper[5115]: E0120 09:10:22.213788 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:22.713758928 +0000 UTC m=+132.882537458 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:22 crc kubenswrapper[5115]: I0120 09:10:22.296259 5115 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-20T09:10:22.010195143Z","UUID":"9eb799e7-b499-4908-bf21-fcb198d19ef3","Handler":null,"Name":"","Endpoint":""}
Jan 20 09:10:22 crc kubenswrapper[5115]: I0120 09:10:22.301508 5115 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
Jan 20 09:10:22 crc kubenswrapper[5115]: I0120 09:10:22.301551 5115 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
Jan 20 09:10:22 crc kubenswrapper[5115]: I0120 09:10:22.316149 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j"
Jan 20 09:10:22 crc kubenswrapper[5115]: I0120 09:10:22.322226 5115 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 20 09:10:22 crc kubenswrapper[5115]: I0120 09:10:22.322269 5115 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1264ac67579ad07e7e9003054d44fe40dd55285a4b2f7dc74e48be1aee0868a/globalmount\"" pod="openshift-image-registry/image-registry-66587d64c8-b674j"
Jan 20 09:10:22 crc kubenswrapper[5115]: I0120 09:10:22.413506 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j"
Jan 20 09:10:22 crc kubenswrapper[5115]: I0120 09:10:22.419767 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-ttcl5" event={"ID":"a8dd6004-2cc4-4971-9dcb-18d8871286b8","Type":"ContainerStarted","Data":"484a181d692c0d02e1303d457c80939b89ab87a2400e20ec44047fa6277be2ca"}
Jan 20 09:10:22 crc kubenswrapper[5115]: I0120 09:10:22.496497 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\""
Jan 20 09:10:22 crc kubenswrapper[5115]: I0120 09:10:22.504384 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-b674j"
Jan 20 09:10:22 crc kubenswrapper[5115]: I0120 09:10:22.523921 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 20 09:10:22 crc kubenswrapper[5115]: I0120 09:10:22.565060 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". PluginName "kubernetes.io/csi", VolumeGIDValue ""
Jan 20 09:10:22 crc kubenswrapper[5115]: I0120 09:10:22.681538 5115 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc"
Jan 20 09:10:22 crc kubenswrapper[5115]: I0120 09:10:22.727123 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f22a9e9c-d2a2-43c0-91ae-e65e338d1fd3-kubelet-dir\") pod \"f22a9e9c-d2a2-43c0-91ae-e65e338d1fd3\" (UID: \"f22a9e9c-d2a2-43c0-91ae-e65e338d1fd3\") "
Jan 20 09:10:22 crc kubenswrapper[5115]: I0120 09:10:22.727295 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f22a9e9c-d2a2-43c0-91ae-e65e338d1fd3-kube-api-access\") pod \"f22a9e9c-d2a2-43c0-91ae-e65e338d1fd3\" (UID: \"f22a9e9c-d2a2-43c0-91ae-e65e338d1fd3\") "
Jan 20 09:10:22 crc kubenswrapper[5115]: I0120 09:10:22.727593 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f22a9e9c-d2a2-43c0-91ae-e65e338d1fd3-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "f22a9e9c-d2a2-43c0-91ae-e65e338d1fd3" (UID: "f22a9e9c-d2a2-43c0-91ae-e65e338d1fd3"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 20 09:10:22 crc kubenswrapper[5115]: I0120 09:10:22.732965 5115 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-n9hxc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 20 09:10:22 crc kubenswrapper[5115]: [-]has-synced failed: reason withheld
Jan 20 09:10:22 crc kubenswrapper[5115]: [+]process-running ok
Jan 20 09:10:22 crc kubenswrapper[5115]: healthz check failed
Jan 20 09:10:22 crc kubenswrapper[5115]: I0120 09:10:22.733028 5115 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-n9hxc" podUID="0d738dd6-3c15-4131-837d-591792cb41cd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 20 09:10:22 crc kubenswrapper[5115]: I0120 09:10:22.738830 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f22a9e9c-d2a2-43c0-91ae-e65e338d1fd3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f22a9e9c-d2a2-43c0-91ae-e65e338d1fd3" (UID: "f22a9e9c-d2a2-43c0-91ae-e65e338d1fd3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 20 09:10:22 crc kubenswrapper[5115]: I0120 09:10:22.830660 5115 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f22a9e9c-d2a2-43c0-91ae-e65e338d1fd3-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 20 09:10:22 crc kubenswrapper[5115]: I0120 09:10:22.830711 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f22a9e9c-d2a2-43c0-91ae-e65e338d1fd3-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 20 09:10:22 crc kubenswrapper[5115]: I0120 09:10:22.913225 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-b674j"]
Jan 20 09:10:22 crc kubenswrapper[5115]: W0120 09:10:22.925395 5115 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod580c8ecd_e9bb_4c33_aeb2_f304adb8119c.slice/crio-d053f0589af44bf1ec4966f80948e0266381b97821c76787bebafd985060d717 WatchSource:0}: Error finding container d053f0589af44bf1ec4966f80948e0266381b97821c76787bebafd985060d717: Status 404 returned error can't find the container with id d053f0589af44bf1ec4966f80948e0266381b97821c76787bebafd985060d717
Jan 20 09:10:23 crc kubenswrapper[5115]: I0120 09:10:23.436952 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-ttcl5" event={"ID":"a8dd6004-2cc4-4971-9dcb-18d8871286b8","Type":"ContainerStarted","Data":"d0e95563e64cf343471c4ee061cce2808083b14923444ba8c0967cdfb0ae61c2"}
Jan 20 09:10:23 crc kubenswrapper[5115]: I0120 09:10:23.443356 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"f22a9e9c-d2a2-43c0-91ae-e65e338d1fd3","Type":"ContainerDied","Data":"b4fa9a1ceaf5ad43ffd3fee419d8a0356e096f72ca2c6d2218b303494b3f72a4"}
Jan 20 09:10:23 crc kubenswrapper[5115]: I0120 09:10:23.443413 5115 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4fa9a1ceaf5ad43ffd3fee419d8a0356e096f72ca2c6d2218b303494b3f72a4"
Jan 20 09:10:23 crc kubenswrapper[5115]: I0120 09:10:23.443410 5115 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc"
Jan 20 09:10:23 crc kubenswrapper[5115]: I0120 09:10:23.445580 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-b674j" event={"ID":"580c8ecd-e9bb-4c33-aeb2-f304adb8119c","Type":"ContainerStarted","Data":"d053f0589af44bf1ec4966f80948e0266381b97821c76787bebafd985060d717"}
Jan 20 09:10:23 crc kubenswrapper[5115]: E0120 09:10:23.527644 5115 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fbd3f92e049db05dae4cc895fdc510d06b5848377015dd755d42e4d740ef5442" cmd=["/bin/bash","-c","test -f /ready/ready"]
Jan 20 09:10:23 crc kubenswrapper[5115]: E0120 09:10:23.531792 5115 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fbd3f92e049db05dae4cc895fdc510d06b5848377015dd755d42e4d740ef5442" cmd=["/bin/bash","-c","test -f /ready/ready"]
Jan 20 09:10:23 crc kubenswrapper[5115]: E0120 09:10:23.534784 5115 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fbd3f92e049db05dae4cc895fdc510d06b5848377015dd755d42e4d740ef5442" cmd=["/bin/bash","-c","test -f /ready/ready"]
Jan 20 09:10:23 crc kubenswrapper[5115]: E0120 09:10:23.534845 5115 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-pkz7s" podUID="4d93cff2-21b0-4fcb-b899-b6efe5a56822" containerName="kube-multus-additional-cni-plugins" probeResult="unknown"
Jan 20 09:10:23 crc kubenswrapper[5115]: I0120 09:10:23.730220 5115 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-n9hxc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 20 09:10:23 crc kubenswrapper[5115]: [-]has-synced failed: reason withheld
Jan 20 09:10:23 crc kubenswrapper[5115]: [+]process-running ok
Jan 20 09:10:23 crc kubenswrapper[5115]: healthz check failed
Jan 20 09:10:23 crc kubenswrapper[5115]: I0120 09:10:23.730317 5115 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-n9hxc" podUID="0d738dd6-3c15-4131-837d-591792cb41cd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 20 09:10:24 crc kubenswrapper[5115]: I0120 09:10:24.040809 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"]
Jan 20 09:10:24 crc kubenswrapper[5115]: I0120 09:10:24.041456 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f22a9e9c-d2a2-43c0-91ae-e65e338d1fd3" containerName="pruner"
Jan 20 09:10:24 crc kubenswrapper[5115]: I0120 09:10:24.041469 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="f22a9e9c-d2a2-43c0-91ae-e65e338d1fd3" containerName="pruner"
Jan 20 09:10:24 crc kubenswrapper[5115]: I0120 09:10:24.041487 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="082f3bd2-f112-4f2e-b955-0826aac6df97" containerName="collect-profiles"
Jan 20 09:10:24 crc kubenswrapper[5115]: I0120 09:10:24.041493 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="082f3bd2-f112-4f2e-b955-0826aac6df97" containerName="collect-profiles"
Jan 20 09:10:24 crc kubenswrapper[5115]: I0120 09:10:24.041600 5115 memory_manager.go:356] "RemoveStaleState removing state" podUID="082f3bd2-f112-4f2e-b955-0826aac6df97" containerName="collect-profiles"
Jan 20 09:10:24 crc kubenswrapper[5115]: I0120 09:10:24.041614 5115 memory_manager.go:356] "RemoveStaleState removing state" podUID="f22a9e9c-d2a2-43c0-91ae-e65e338d1fd3" containerName="pruner"
Jan 20 09:10:24 crc kubenswrapper[5115]: I0120 09:10:24.244851 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"]
Jan 20 09:10:24 crc kubenswrapper[5115]: I0120 09:10:24.245574 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc"
Jan 20 09:10:24 crc kubenswrapper[5115]: I0120 09:10:24.248307 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\""
Jan 20 09:10:24 crc kubenswrapper[5115]: I0120 09:10:24.250279 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\""
Jan 20 09:10:24 crc kubenswrapper[5115]: I0120 09:10:24.279359 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e9b5059-1b3e-4067-a63d-2952cbe863af" path="/var/lib/kubelet/pods/9e9b5059-1b3e-4067-a63d-2952cbe863af/volumes"
Jan 20 09:10:24 crc kubenswrapper[5115]: I0120 09:10:24.306721 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-59xcc"
Jan 20 09:10:24 crc kubenswrapper[5115]: I0120 09:10:24.356081 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fed085de-0c46-4008-90d3-73bfbbbd98e5-kube-api-access\") pod
\"revision-pruner-11-crc\" (UID: \"fed085de-0c46-4008-90d3-73bfbbbd98e5\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 20 09:10:24 crc kubenswrapper[5115]: I0120 09:10:24.359360 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fed085de-0c46-4008-90d3-73bfbbbd98e5-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"fed085de-0c46-4008-90d3-73bfbbbd98e5\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 20 09:10:24 crc kubenswrapper[5115]: I0120 09:10:24.451356 5115 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-9ddfb9f55-xn6qp" Jan 20 09:10:24 crc kubenswrapper[5115]: I0120 09:10:24.457878 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-9ddfb9f55-xn6qp" Jan 20 09:10:24 crc kubenswrapper[5115]: I0120 09:10:24.460541 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fed085de-0c46-4008-90d3-73bfbbbd98e5-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"fed085de-0c46-4008-90d3-73bfbbbd98e5\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 20 09:10:24 crc kubenswrapper[5115]: I0120 09:10:24.461154 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fed085de-0c46-4008-90d3-73bfbbbd98e5-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"fed085de-0c46-4008-90d3-73bfbbbd98e5\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 20 09:10:24 crc kubenswrapper[5115]: I0120 09:10:24.460726 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fed085de-0c46-4008-90d3-73bfbbbd98e5-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: 
\"fed085de-0c46-4008-90d3-73bfbbbd98e5\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 20 09:10:24 crc kubenswrapper[5115]: I0120 09:10:24.462501 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-ttcl5" event={"ID":"a8dd6004-2cc4-4971-9dcb-18d8871286b8","Type":"ContainerStarted","Data":"f0e034b3778ca9b13cc062038f8c0b3384de2102bc4b55c42742e4878f817854"} Jan 20 09:10:24 crc kubenswrapper[5115]: I0120 09:10:24.497836 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fed085de-0c46-4008-90d3-73bfbbbd98e5-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"fed085de-0c46-4008-90d3-73bfbbbd98e5\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 20 09:10:24 crc kubenswrapper[5115]: I0120 09:10:24.506626 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-ttcl5" podStartSLOduration=18.506591621 podStartE2EDuration="18.506591621s" podCreationTimestamp="2026-01-20 09:10:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:10:24.50131293 +0000 UTC m=+134.670091470" watchObservedRunningTime="2026-01-20 09:10:24.506591621 +0000 UTC m=+134.675370161" Jan 20 09:10:24 crc kubenswrapper[5115]: I0120 09:10:24.582942 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 20 09:10:24 crc kubenswrapper[5115]: I0120 09:10:24.741408 5115 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-n9hxc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 09:10:24 crc kubenswrapper[5115]: [-]has-synced failed: reason withheld Jan 20 09:10:24 crc kubenswrapper[5115]: [+]process-running ok Jan 20 09:10:24 crc kubenswrapper[5115]: healthz check failed Jan 20 09:10:24 crc kubenswrapper[5115]: I0120 09:10:24.741932 5115 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-n9hxc" podUID="0d738dd6-3c15-4131-837d-591792cb41cd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 09:10:25 crc kubenswrapper[5115]: I0120 09:10:25.078802 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Jan 20 09:10:25 crc kubenswrapper[5115]: W0120 09:10:25.087987 5115 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podfed085de_0c46_4008_90d3_73bfbbbd98e5.slice/crio-6268286b4b9de913e928d9c698d1aaf7314242b0196f0c48150a63a078ee04b7 WatchSource:0}: Error finding container 6268286b4b9de913e928d9c698d1aaf7314242b0196f0c48150a63a078ee04b7: Status 404 returned error can't find the container with id 6268286b4b9de913e928d9c698d1aaf7314242b0196f0c48150a63a078ee04b7 Jan 20 09:10:25 crc kubenswrapper[5115]: I0120 09:10:25.472471 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"fed085de-0c46-4008-90d3-73bfbbbd98e5","Type":"ContainerStarted","Data":"6268286b4b9de913e928d9c698d1aaf7314242b0196f0c48150a63a078ee04b7"} Jan 20 09:10:25 crc kubenswrapper[5115]: I0120 09:10:25.474666 5115 kubelet.go:2569] 
"SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-b674j" event={"ID":"580c8ecd-e9bb-4c33-aeb2-f304adb8119c","Type":"ContainerStarted","Data":"658aaa1c341101e06f75ed771bab4ffef1039984a8c36f1f22e7f660d9e832ca"} Jan 20 09:10:25 crc kubenswrapper[5115]: I0120 09:10:25.475388 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:25 crc kubenswrapper[5115]: I0120 09:10:25.499043 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66587d64c8-b674j" podStartSLOduration=114.499024033 podStartE2EDuration="1m54.499024033s" podCreationTimestamp="2026-01-20 09:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:10:25.494607465 +0000 UTC m=+135.663385995" watchObservedRunningTime="2026-01-20 09:10:25.499024033 +0000 UTC m=+135.667802563" Jan 20 09:10:25 crc kubenswrapper[5115]: I0120 09:10:25.586257 5115 ???:1] "http: TLS handshake error from 192.168.126.11:56408: no serving certificate available for the kubelet" Jan 20 09:10:25 crc kubenswrapper[5115]: I0120 09:10:25.731518 5115 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-n9hxc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 09:10:25 crc kubenswrapper[5115]: [-]has-synced failed: reason withheld Jan 20 09:10:25 crc kubenswrapper[5115]: [+]process-running ok Jan 20 09:10:25 crc kubenswrapper[5115]: healthz check failed Jan 20 09:10:25 crc kubenswrapper[5115]: I0120 09:10:25.731612 5115 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-n9hxc" podUID="0d738dd6-3c15-4131-837d-591792cb41cd" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 09:10:25 crc kubenswrapper[5115]: I0120 09:10:25.756240 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-747b44746d-ljj2s" Jan 20 09:10:26 crc kubenswrapper[5115]: I0120 09:10:26.486073 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"fed085de-0c46-4008-90d3-73bfbbbd98e5","Type":"ContainerStarted","Data":"0a9efaca9446742ac2f456bcbf4723314f9fc1f8ccf1efc98b29a9535d0e685a"} Jan 20 09:10:26 crc kubenswrapper[5115]: I0120 09:10:26.507612 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-11-crc" podStartSLOduration=2.507584035 podStartE2EDuration="2.507584035s" podCreationTimestamp="2026-01-20 09:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:10:26.501726488 +0000 UTC m=+136.670505018" watchObservedRunningTime="2026-01-20 09:10:26.507584035 +0000 UTC m=+136.676362565" Jan 20 09:10:26 crc kubenswrapper[5115]: I0120 09:10:26.729402 5115 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-n9hxc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 09:10:26 crc kubenswrapper[5115]: [-]has-synced failed: reason withheld Jan 20 09:10:26 crc kubenswrapper[5115]: [+]process-running ok Jan 20 09:10:26 crc kubenswrapper[5115]: healthz check failed Jan 20 09:10:26 crc kubenswrapper[5115]: I0120 09:10:26.729728 5115 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-n9hxc" podUID="0d738dd6-3c15-4131-837d-591792cb41cd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 
09:10:27 crc kubenswrapper[5115]: I0120 09:10:27.729095 5115 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-n9hxc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 09:10:27 crc kubenswrapper[5115]: [-]has-synced failed: reason withheld Jan 20 09:10:27 crc kubenswrapper[5115]: [+]process-running ok Jan 20 09:10:27 crc kubenswrapper[5115]: healthz check failed Jan 20 09:10:27 crc kubenswrapper[5115]: I0120 09:10:27.729194 5115 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-n9hxc" podUID="0d738dd6-3c15-4131-837d-591792cb41cd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 09:10:28 crc kubenswrapper[5115]: I0120 09:10:28.506938 5115 generic.go:358] "Generic (PLEG): container finished" podID="fed085de-0c46-4008-90d3-73bfbbbd98e5" containerID="0a9efaca9446742ac2f456bcbf4723314f9fc1f8ccf1efc98b29a9535d0e685a" exitCode=0 Jan 20 09:10:28 crc kubenswrapper[5115]: I0120 09:10:28.507145 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"fed085de-0c46-4008-90d3-73bfbbbd98e5","Type":"ContainerDied","Data":"0a9efaca9446742ac2f456bcbf4723314f9fc1f8ccf1efc98b29a9535d0e685a"} Jan 20 09:10:28 crc kubenswrapper[5115]: I0120 09:10:28.729066 5115 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-n9hxc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 09:10:28 crc kubenswrapper[5115]: [-]has-synced failed: reason withheld Jan 20 09:10:28 crc kubenswrapper[5115]: [+]process-running ok Jan 20 09:10:28 crc kubenswrapper[5115]: healthz check failed Jan 20 09:10:28 crc kubenswrapper[5115]: I0120 09:10:28.729162 5115 prober.go:120] 
"Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-n9hxc" podUID="0d738dd6-3c15-4131-837d-591792cb41cd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 09:10:29 crc kubenswrapper[5115]: I0120 09:10:29.532380 5115 patch_prober.go:28] interesting pod/console-64d44f6ddf-78z8z container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.25:8443/health\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Jan 20 09:10:29 crc kubenswrapper[5115]: I0120 09:10:29.532483 5115 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-78z8z" podUID="9aa837bd-63fc-4bb8-b158-d8632117a117" containerName="console" probeResult="failure" output="Get \"https://10.217.0.25:8443/health\": dial tcp 10.217.0.25:8443: connect: connection refused" Jan 20 09:10:29 crc kubenswrapper[5115]: I0120 09:10:29.729324 5115 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-n9hxc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 09:10:29 crc kubenswrapper[5115]: [-]has-synced failed: reason withheld Jan 20 09:10:29 crc kubenswrapper[5115]: [+]process-running ok Jan 20 09:10:29 crc kubenswrapper[5115]: healthz check failed Jan 20 09:10:29 crc kubenswrapper[5115]: I0120 09:10:29.729409 5115 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-n9hxc" podUID="0d738dd6-3c15-4131-837d-591792cb41cd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 09:10:30 crc kubenswrapper[5115]: I0120 09:10:30.729388 5115 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-n9hxc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with 
statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 09:10:30 crc kubenswrapper[5115]: [-]has-synced failed: reason withheld Jan 20 09:10:30 crc kubenswrapper[5115]: [+]process-running ok Jan 20 09:10:30 crc kubenswrapper[5115]: healthz check failed Jan 20 09:10:30 crc kubenswrapper[5115]: I0120 09:10:30.729500 5115 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-n9hxc" podUID="0d738dd6-3c15-4131-837d-591792cb41cd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 09:10:31 crc kubenswrapper[5115]: I0120 09:10:31.731005 5115 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-n9hxc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 09:10:31 crc kubenswrapper[5115]: [-]has-synced failed: reason withheld Jan 20 09:10:31 crc kubenswrapper[5115]: [+]process-running ok Jan 20 09:10:31 crc kubenswrapper[5115]: healthz check failed Jan 20 09:10:31 crc kubenswrapper[5115]: I0120 09:10:31.731636 5115 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-n9hxc" podUID="0d738dd6-3c15-4131-837d-591792cb41cd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 09:10:31 crc kubenswrapper[5115]: I0120 09:10:31.847018 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:10:32 crc kubenswrapper[5115]: I0120 09:10:32.729595 5115 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-n9hxc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 09:10:32 crc kubenswrapper[5115]: [-]has-synced failed: reason withheld 
Jan 20 09:10:32 crc kubenswrapper[5115]: [+]process-running ok Jan 20 09:10:32 crc kubenswrapper[5115]: healthz check failed Jan 20 09:10:32 crc kubenswrapper[5115]: I0120 09:10:32.729709 5115 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-n9hxc" podUID="0d738dd6-3c15-4131-837d-591792cb41cd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 09:10:33 crc kubenswrapper[5115]: E0120 09:10:33.528167 5115 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fbd3f92e049db05dae4cc895fdc510d06b5848377015dd755d42e4d740ef5442" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 20 09:10:33 crc kubenswrapper[5115]: E0120 09:10:33.530290 5115 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fbd3f92e049db05dae4cc895fdc510d06b5848377015dd755d42e4d740ef5442" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 20 09:10:33 crc kubenswrapper[5115]: E0120 09:10:33.531477 5115 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fbd3f92e049db05dae4cc895fdc510d06b5848377015dd755d42e4d740ef5442" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 20 09:10:33 crc kubenswrapper[5115]: E0120 09:10:33.531524 5115 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-pkz7s" podUID="4d93cff2-21b0-4fcb-b899-b6efe5a56822" 
containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Jan 20 09:10:33 crc kubenswrapper[5115]: I0120 09:10:33.729650 5115 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-n9hxc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 09:10:33 crc kubenswrapper[5115]: [-]has-synced failed: reason withheld Jan 20 09:10:33 crc kubenswrapper[5115]: [+]process-running ok Jan 20 09:10:33 crc kubenswrapper[5115]: healthz check failed Jan 20 09:10:33 crc kubenswrapper[5115]: I0120 09:10:33.729738 5115 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-n9hxc" podUID="0d738dd6-3c15-4131-837d-591792cb41cd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 09:10:34 crc kubenswrapper[5115]: I0120 09:10:34.729672 5115 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-68cf44c8b8-n9hxc" Jan 20 09:10:34 crc kubenswrapper[5115]: I0120 09:10:34.734641 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-68cf44c8b8-n9hxc" Jan 20 09:10:35 crc kubenswrapper[5115]: I0120 09:10:35.851676 5115 ???:1] "http: TLS handshake error from 192.168.126.11:40232: no serving certificate available for the kubelet" Jan 20 09:10:37 crc kubenswrapper[5115]: I0120 09:10:37.095492 5115 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-lg8fb"] Jan 20 09:10:37 crc kubenswrapper[5115]: I0120 09:10:37.096491 5115 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-65b6cccf98-lg8fb" podUID="664dc1e9-b220-4dd9-8576-b5798850bc57" containerName="controller-manager" 
containerID="cri-o://883ad34e44bc13a65fb331c725c96d57ffd7da473ec9ed16860ba076f2702bf1" gracePeriod=30 Jan 20 09:10:37 crc kubenswrapper[5115]: I0120 09:10:37.159301 5115 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-jxpqr"] Jan 20 09:10:37 crc kubenswrapper[5115]: I0120 09:10:37.159612 5115 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jxpqr" podUID="b39cc292-22ad-4fb0-9d3f-6467c81680eb" containerName="route-controller-manager" containerID="cri-o://b0488d20e94845aedd9b1bbe8d5471305129edf3c1b7b5a598c3cede13658a01" gracePeriod=30 Jan 20 09:10:37 crc kubenswrapper[5115]: I0120 09:10:37.325501 5115 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 20 09:10:37 crc kubenswrapper[5115]: I0120 09:10:37.502830 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fed085de-0c46-4008-90d3-73bfbbbd98e5-kube-api-access\") pod \"fed085de-0c46-4008-90d3-73bfbbbd98e5\" (UID: \"fed085de-0c46-4008-90d3-73bfbbbd98e5\") " Jan 20 09:10:37 crc kubenswrapper[5115]: I0120 09:10:37.502925 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fed085de-0c46-4008-90d3-73bfbbbd98e5-kubelet-dir\") pod \"fed085de-0c46-4008-90d3-73bfbbbd98e5\" (UID: \"fed085de-0c46-4008-90d3-73bfbbbd98e5\") " Jan 20 09:10:37 crc kubenswrapper[5115]: I0120 09:10:37.503390 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fed085de-0c46-4008-90d3-73bfbbbd98e5-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "fed085de-0c46-4008-90d3-73bfbbbd98e5" (UID: "fed085de-0c46-4008-90d3-73bfbbbd98e5"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 09:10:37 crc kubenswrapper[5115]: I0120 09:10:37.517227 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fed085de-0c46-4008-90d3-73bfbbbd98e5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "fed085de-0c46-4008-90d3-73bfbbbd98e5" (UID: "fed085de-0c46-4008-90d3-73bfbbbd98e5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:10:37 crc kubenswrapper[5115]: I0120 09:10:37.574672 5115 generic.go:358] "Generic (PLEG): container finished" podID="664dc1e9-b220-4dd9-8576-b5798850bc57" containerID="883ad34e44bc13a65fb331c725c96d57ffd7da473ec9ed16860ba076f2702bf1" exitCode=0 Jan 20 09:10:37 crc kubenswrapper[5115]: I0120 09:10:37.575265 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-lg8fb" event={"ID":"664dc1e9-b220-4dd9-8576-b5798850bc57","Type":"ContainerDied","Data":"883ad34e44bc13a65fb331c725c96d57ffd7da473ec9ed16860ba076f2702bf1"} Jan 20 09:10:37 crc kubenswrapper[5115]: I0120 09:10:37.576846 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"fed085de-0c46-4008-90d3-73bfbbbd98e5","Type":"ContainerDied","Data":"6268286b4b9de913e928d9c698d1aaf7314242b0196f0c48150a63a078ee04b7"} Jan 20 09:10:37 crc kubenswrapper[5115]: I0120 09:10:37.576886 5115 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6268286b4b9de913e928d9c698d1aaf7314242b0196f0c48150a63a078ee04b7" Jan 20 09:10:37 crc kubenswrapper[5115]: I0120 09:10:37.576938 5115 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 20 09:10:37 crc kubenswrapper[5115]: I0120 09:10:37.579873 5115 generic.go:358] "Generic (PLEG): container finished" podID="b39cc292-22ad-4fb0-9d3f-6467c81680eb" containerID="b0488d20e94845aedd9b1bbe8d5471305129edf3c1b7b5a598c3cede13658a01" exitCode=0 Jan 20 09:10:37 crc kubenswrapper[5115]: I0120 09:10:37.579967 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jxpqr" event={"ID":"b39cc292-22ad-4fb0-9d3f-6467c81680eb","Type":"ContainerDied","Data":"b0488d20e94845aedd9b1bbe8d5471305129edf3c1b7b5a598c3cede13658a01"} Jan 20 09:10:37 crc kubenswrapper[5115]: I0120 09:10:37.604686 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fed085de-0c46-4008-90d3-73bfbbbd98e5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 20 09:10:37 crc kubenswrapper[5115]: I0120 09:10:37.604735 5115 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fed085de-0c46-4008-90d3-73bfbbbd98e5-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 20 09:10:39 crc kubenswrapper[5115]: I0120 09:10:39.538262 5115 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-64d44f6ddf-78z8z" Jan 20 09:10:39 crc kubenswrapper[5115]: I0120 09:10:39.545045 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-64d44f6ddf-78z8z" Jan 20 09:10:42 crc kubenswrapper[5115]: I0120 09:10:42.988580 5115 patch_prober.go:28] interesting pod/controller-manager-65b6cccf98-lg8fb container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Jan 20 09:10:42 crc kubenswrapper[5115]: I0120 
09:10:42.989249 5115 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-65b6cccf98-lg8fb" podUID="664dc1e9-b220-4dd9-8576-b5798850bc57" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" Jan 20 09:10:43 crc kubenswrapper[5115]: E0120 09:10:43.528645 5115 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fbd3f92e049db05dae4cc895fdc510d06b5848377015dd755d42e4d740ef5442" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 20 09:10:43 crc kubenswrapper[5115]: E0120 09:10:43.532024 5115 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fbd3f92e049db05dae4cc895fdc510d06b5848377015dd755d42e4d740ef5442" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 20 09:10:43 crc kubenswrapper[5115]: E0120 09:10:43.534325 5115 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fbd3f92e049db05dae4cc895fdc510d06b5848377015dd755d42e4d740ef5442" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 20 09:10:43 crc kubenswrapper[5115]: E0120 09:10:43.534407 5115 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-pkz7s" podUID="4d93cff2-21b0-4fcb-b899-b6efe5a56822" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Jan 20 09:10:45 crc kubenswrapper[5115]: I0120 
09:10:45.682088 5115 patch_prober.go:28] interesting pod/route-controller-manager-776cdc94d6-jxpqr container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" start-of-body=
Jan 20 09:10:45 crc kubenswrapper[5115]: I0120 09:10:45.683184 5115 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jxpqr" podUID="b39cc292-22ad-4fb0-9d3f-6467c81680eb" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused"
Jan 20 09:10:46 crc kubenswrapper[5115]: I0120 09:10:46.493766 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66587d64c8-b674j"
Jan 20 09:10:47 crc kubenswrapper[5115]: I0120 09:10:47.692334 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 20 09:10:49 crc kubenswrapper[5115]: I0120 09:10:49.316138 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-gc77j"
Jan 20 09:10:49 crc kubenswrapper[5115]: I0120 09:10:49.667702 5115 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-pkz7s_4d93cff2-21b0-4fcb-b899-b6efe5a56822/kube-multus-additional-cni-plugins/0.log"
Jan 20 09:10:49 crc kubenswrapper[5115]: I0120 09:10:49.667779 5115 generic.go:358] "Generic (PLEG): container finished" podID="4d93cff2-21b0-4fcb-b899-b6efe5a56822" containerID="fbd3f92e049db05dae4cc895fdc510d06b5848377015dd755d42e4d740ef5442" exitCode=137
Jan 20 09:10:49 crc kubenswrapper[5115]: I0120 09:10:49.667842 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-pkz7s" event={"ID":"4d93cff2-21b0-4fcb-b899-b6efe5a56822","Type":"ContainerDied","Data":"fbd3f92e049db05dae4cc895fdc510d06b5848377015dd755d42e4d740ef5442"}
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.366236 5115 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-pkz7s_4d93cff2-21b0-4fcb-b899-b6efe5a56822/kube-multus-additional-cni-plugins/0.log"
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.366648 5115 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-pkz7s"
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.419190 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/4d93cff2-21b0-4fcb-b899-b6efe5a56822-ready\") pod \"4d93cff2-21b0-4fcb-b899-b6efe5a56822\" (UID: \"4d93cff2-21b0-4fcb-b899-b6efe5a56822\") "
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.419430 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/4d93cff2-21b0-4fcb-b899-b6efe5a56822-cni-sysctl-allowlist\") pod \"4d93cff2-21b0-4fcb-b899-b6efe5a56822\" (UID: \"4d93cff2-21b0-4fcb-b899-b6efe5a56822\") "
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.419544 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x65zw\" (UniqueName: \"kubernetes.io/projected/4d93cff2-21b0-4fcb-b899-b6efe5a56822-kube-api-access-x65zw\") pod \"4d93cff2-21b0-4fcb-b899-b6efe5a56822\" (UID: \"4d93cff2-21b0-4fcb-b899-b6efe5a56822\") "
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.419575 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4d93cff2-21b0-4fcb-b899-b6efe5a56822-tuning-conf-dir\") pod \"4d93cff2-21b0-4fcb-b899-b6efe5a56822\" (UID: \"4d93cff2-21b0-4fcb-b899-b6efe5a56822\") "
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.420058 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4d93cff2-21b0-4fcb-b899-b6efe5a56822-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "4d93cff2-21b0-4fcb-b899-b6efe5a56822" (UID: "4d93cff2-21b0-4fcb-b899-b6efe5a56822"). InnerVolumeSpecName "tuning-conf-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.420074 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d93cff2-21b0-4fcb-b899-b6efe5a56822-ready" (OuterVolumeSpecName: "ready") pod "4d93cff2-21b0-4fcb-b899-b6efe5a56822" (UID: "4d93cff2-21b0-4fcb-b899-b6efe5a56822"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.420881 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d93cff2-21b0-4fcb-b899-b6efe5a56822-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "4d93cff2-21b0-4fcb-b899-b6efe5a56822" (UID: "4d93cff2-21b0-4fcb-b899-b6efe5a56822"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.431882 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d93cff2-21b0-4fcb-b899-b6efe5a56822-kube-api-access-x65zw" (OuterVolumeSpecName: "kube-api-access-x65zw") pod "4d93cff2-21b0-4fcb-b899-b6efe5a56822" (UID: "4d93cff2-21b0-4fcb-b899-b6efe5a56822"). InnerVolumeSpecName "kube-api-access-x65zw". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.522934 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-x65zw\" (UniqueName: \"kubernetes.io/projected/4d93cff2-21b0-4fcb-b899-b6efe5a56822-kube-api-access-x65zw\") on node \"crc\" DevicePath \"\""
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.522974 5115 reconciler_common.go:299] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4d93cff2-21b0-4fcb-b899-b6efe5a56822-tuning-conf-dir\") on node \"crc\" DevicePath \"\""
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.522984 5115 reconciler_common.go:299] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/4d93cff2-21b0-4fcb-b899-b6efe5a56822-ready\") on node \"crc\" DevicePath \"\""
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.522994 5115 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/4d93cff2-21b0-4fcb-b899-b6efe5a56822-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\""
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.678263 5115 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-pkz7s_4d93cff2-21b0-4fcb-b899-b6efe5a56822/kube-multus-additional-cni-plugins/0.log"
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.678514 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-pkz7s" event={"ID":"4d93cff2-21b0-4fcb-b899-b6efe5a56822","Type":"ContainerDied","Data":"857692043d4e2a0e52ae73c61d049790e037f8377cfd4c3084e2ea0725ae7c00"}
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.678582 5115 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-pkz7s"
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.678629 5115 scope.go:117] "RemoveContainer" containerID="fbd3f92e049db05dae4cc895fdc510d06b5848377015dd755d42e4d740ef5442"
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.684432 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2dlnj" event={"ID":"1d51d284-ea4b-4e3f-95bd-de28c5df1f3c","Type":"ContainerStarted","Data":"a33dfb9140b05712014768cf8b01acc9283196096d0f87e1b764f33c91c5086f"}
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.698575 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vv5qk" event={"ID":"b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3","Type":"ContainerStarted","Data":"099a58929bcd11d7806830d94c60b1c1e735c7d4ed3c769e2373744a991c063d"}
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.706013 5115 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jxpqr"
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.709368 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ln8lc" event={"ID":"098c57a3-a775-41d0-b528-6833df51eb70","Type":"ContainerStarted","Data":"ee94f68db59e4e1ddf21ca6ca9dd7fd93edccbc4ea24208558bcdd84d58df32e"}
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.754846 5115 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-pkz7s"]
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.762110 5115 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-pkz7s"]
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.829437 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b39cc292-22ad-4fb0-9d3f-6467c81680eb-client-ca\") pod \"b39cc292-22ad-4fb0-9d3f-6467c81680eb\" (UID: \"b39cc292-22ad-4fb0-9d3f-6467c81680eb\") "
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.829515 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b39cc292-22ad-4fb0-9d3f-6467c81680eb-config\") pod \"b39cc292-22ad-4fb0-9d3f-6467c81680eb\" (UID: \"b39cc292-22ad-4fb0-9d3f-6467c81680eb\") "
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.829616 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b39cc292-22ad-4fb0-9d3f-6467c81680eb-tmp\") pod \"b39cc292-22ad-4fb0-9d3f-6467c81680eb\" (UID: \"b39cc292-22ad-4fb0-9d3f-6467c81680eb\") "
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.829660 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b39cc292-22ad-4fb0-9d3f-6467c81680eb-serving-cert\") pod \"b39cc292-22ad-4fb0-9d3f-6467c81680eb\" (UID: \"b39cc292-22ad-4fb0-9d3f-6467c81680eb\") "
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.829762 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cj2cf\" (UniqueName: \"kubernetes.io/projected/b39cc292-22ad-4fb0-9d3f-6467c81680eb-kube-api-access-cj2cf\") pod \"b39cc292-22ad-4fb0-9d3f-6467c81680eb\" (UID: \"b39cc292-22ad-4fb0-9d3f-6467c81680eb\") "
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.834392 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64b4fd558d-xn8z9"]
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.835623 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4d93cff2-21b0-4fcb-b899-b6efe5a56822" containerName="kube-multus-additional-cni-plugins"
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.835645 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d93cff2-21b0-4fcb-b899-b6efe5a56822" containerName="kube-multus-additional-cni-plugins"
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.835659 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fed085de-0c46-4008-90d3-73bfbbbd98e5" containerName="pruner"
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.835666 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="fed085de-0c46-4008-90d3-73bfbbbd98e5" containerName="pruner"
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.835689 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b39cc292-22ad-4fb0-9d3f-6467c81680eb" containerName="route-controller-manager"
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.835696 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="b39cc292-22ad-4fb0-9d3f-6467c81680eb" containerName="route-controller-manager"
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.835860 5115 memory_manager.go:356] "RemoveStaleState removing state" podUID="4d93cff2-21b0-4fcb-b899-b6efe5a56822" containerName="kube-multus-additional-cni-plugins"
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.835875 5115 memory_manager.go:356] "RemoveStaleState removing state" podUID="b39cc292-22ad-4fb0-9d3f-6467c81680eb" containerName="route-controller-manager"
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.835884 5115 memory_manager.go:356] "RemoveStaleState removing state" podUID="fed085de-0c46-4008-90d3-73bfbbbd98e5" containerName="pruner"
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.838785 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b39cc292-22ad-4fb0-9d3f-6467c81680eb-client-ca" (OuterVolumeSpecName: "client-ca") pod "b39cc292-22ad-4fb0-9d3f-6467c81680eb" (UID: "b39cc292-22ad-4fb0-9d3f-6467c81680eb"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.843990 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b39cc292-22ad-4fb0-9d3f-6467c81680eb-config" (OuterVolumeSpecName: "config") pod "b39cc292-22ad-4fb0-9d3f-6467c81680eb" (UID: "b39cc292-22ad-4fb0-9d3f-6467c81680eb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.846416 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b39cc292-22ad-4fb0-9d3f-6467c81680eb-tmp" (OuterVolumeSpecName: "tmp") pod "b39cc292-22ad-4fb0-9d3f-6467c81680eb" (UID: "b39cc292-22ad-4fb0-9d3f-6467c81680eb"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.853023 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64b4fd558d-xn8z9"]
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.853255 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-64b4fd558d-xn8z9"
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.854101 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b39cc292-22ad-4fb0-9d3f-6467c81680eb-kube-api-access-cj2cf" (OuterVolumeSpecName: "kube-api-access-cj2cf") pod "b39cc292-22ad-4fb0-9d3f-6467c81680eb" (UID: "b39cc292-22ad-4fb0-9d3f-6467c81680eb"). InnerVolumeSpecName "kube-api-access-cj2cf". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.856358 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b39cc292-22ad-4fb0-9d3f-6467c81680eb-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b39cc292-22ad-4fb0-9d3f-6467c81680eb" (UID: "b39cc292-22ad-4fb0-9d3f-6467c81680eb"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.933123 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6822615-2e54-40b4-a17f-9d5fb26e31db-config\") pod \"route-controller-manager-64b4fd558d-xn8z9\" (UID: \"f6822615-2e54-40b4-a17f-9d5fb26e31db\") " pod="openshift-route-controller-manager/route-controller-manager-64b4fd558d-xn8z9"
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.933619 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f6822615-2e54-40b4-a17f-9d5fb26e31db-tmp\") pod \"route-controller-manager-64b4fd558d-xn8z9\" (UID: \"f6822615-2e54-40b4-a17f-9d5fb26e31db\") " pod="openshift-route-controller-manager/route-controller-manager-64b4fd558d-xn8z9"
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.933647 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hd9fq\" (UniqueName: \"kubernetes.io/projected/f6822615-2e54-40b4-a17f-9d5fb26e31db-kube-api-access-hd9fq\") pod \"route-controller-manager-64b4fd558d-xn8z9\" (UID: \"f6822615-2e54-40b4-a17f-9d5fb26e31db\") " pod="openshift-route-controller-manager/route-controller-manager-64b4fd558d-xn8z9"
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.933735 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f6822615-2e54-40b4-a17f-9d5fb26e31db-client-ca\") pod \"route-controller-manager-64b4fd558d-xn8z9\" (UID: \"f6822615-2e54-40b4-a17f-9d5fb26e31db\") " pod="openshift-route-controller-manager/route-controller-manager-64b4fd558d-xn8z9"
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.933767 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6822615-2e54-40b4-a17f-9d5fb26e31db-serving-cert\") pod \"route-controller-manager-64b4fd558d-xn8z9\" (UID: \"f6822615-2e54-40b4-a17f-9d5fb26e31db\") " pod="openshift-route-controller-manager/route-controller-manager-64b4fd558d-xn8z9"
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.933819 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-cj2cf\" (UniqueName: \"kubernetes.io/projected/b39cc292-22ad-4fb0-9d3f-6467c81680eb-kube-api-access-cj2cf\") on node \"crc\" DevicePath \"\""
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.933832 5115 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b39cc292-22ad-4fb0-9d3f-6467c81680eb-client-ca\") on node \"crc\" DevicePath \"\""
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.933844 5115 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b39cc292-22ad-4fb0-9d3f-6467c81680eb-config\") on node \"crc\" DevicePath \"\""
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.933855 5115 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b39cc292-22ad-4fb0-9d3f-6467c81680eb-tmp\") on node \"crc\" DevicePath \"\""
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.933865 5115 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b39cc292-22ad-4fb0-9d3f-6467c81680eb-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.035033 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f6822615-2e54-40b4-a17f-9d5fb26e31db-client-ca\") pod \"route-controller-manager-64b4fd558d-xn8z9\" (UID: \"f6822615-2e54-40b4-a17f-9d5fb26e31db\") " pod="openshift-route-controller-manager/route-controller-manager-64b4fd558d-xn8z9"
Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.035122 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6822615-2e54-40b4-a17f-9d5fb26e31db-serving-cert\") pod \"route-controller-manager-64b4fd558d-xn8z9\" (UID: \"f6822615-2e54-40b4-a17f-9d5fb26e31db\") " pod="openshift-route-controller-manager/route-controller-manager-64b4fd558d-xn8z9"
Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.035154 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6822615-2e54-40b4-a17f-9d5fb26e31db-config\") pod \"route-controller-manager-64b4fd558d-xn8z9\" (UID: \"f6822615-2e54-40b4-a17f-9d5fb26e31db\") " pod="openshift-route-controller-manager/route-controller-manager-64b4fd558d-xn8z9"
Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.035208 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f6822615-2e54-40b4-a17f-9d5fb26e31db-tmp\") pod \"route-controller-manager-64b4fd558d-xn8z9\" (UID: \"f6822615-2e54-40b4-a17f-9d5fb26e31db\") " pod="openshift-route-controller-manager/route-controller-manager-64b4fd558d-xn8z9"
Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.035242 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hd9fq\" (UniqueName: \"kubernetes.io/projected/f6822615-2e54-40b4-a17f-9d5fb26e31db-kube-api-access-hd9fq\") pod \"route-controller-manager-64b4fd558d-xn8z9\" (UID: \"f6822615-2e54-40b4-a17f-9d5fb26e31db\") " pod="openshift-route-controller-manager/route-controller-manager-64b4fd558d-xn8z9"
Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.036380 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f6822615-2e54-40b4-a17f-9d5fb26e31db-tmp\") pod \"route-controller-manager-64b4fd558d-xn8z9\" (UID: \"f6822615-2e54-40b4-a17f-9d5fb26e31db\") " pod="openshift-route-controller-manager/route-controller-manager-64b4fd558d-xn8z9"
Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.036929 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f6822615-2e54-40b4-a17f-9d5fb26e31db-client-ca\") pod \"route-controller-manager-64b4fd558d-xn8z9\" (UID: \"f6822615-2e54-40b4-a17f-9d5fb26e31db\") " pod="openshift-route-controller-manager/route-controller-manager-64b4fd558d-xn8z9"
Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.037278 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6822615-2e54-40b4-a17f-9d5fb26e31db-config\") pod \"route-controller-manager-64b4fd558d-xn8z9\" (UID: \"f6822615-2e54-40b4-a17f-9d5fb26e31db\") " pod="openshift-route-controller-manager/route-controller-manager-64b4fd558d-xn8z9"
Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.044376 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6822615-2e54-40b4-a17f-9d5fb26e31db-serving-cert\") pod \"route-controller-manager-64b4fd558d-xn8z9\" (UID: \"f6822615-2e54-40b4-a17f-9d5fb26e31db\") " pod="openshift-route-controller-manager/route-controller-manager-64b4fd558d-xn8z9"
Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.060435 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hd9fq\" (UniqueName: \"kubernetes.io/projected/f6822615-2e54-40b4-a17f-9d5fb26e31db-kube-api-access-hd9fq\") pod \"route-controller-manager-64b4fd558d-xn8z9\" (UID: \"f6822615-2e54-40b4-a17f-9d5fb26e31db\") " pod="openshift-route-controller-manager/route-controller-manager-64b4fd558d-xn8z9"
Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.203237 5115 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-lg8fb"
Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.204714 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-64b4fd558d-xn8z9"
Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.228968 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-8469db6cb8-pclzc"]
Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.229695 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="664dc1e9-b220-4dd9-8576-b5798850bc57" containerName="controller-manager"
Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.229718 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="664dc1e9-b220-4dd9-8576-b5798850bc57" containerName="controller-manager"
Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.229820 5115 memory_manager.go:356] "RemoveStaleState removing state" podUID="664dc1e9-b220-4dd9-8576-b5798850bc57" containerName="controller-manager"
Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.242338 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-8469db6cb8-pclzc"]
Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.242519 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8469db6cb8-pclzc"
Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.339331 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/664dc1e9-b220-4dd9-8576-b5798850bc57-tmp\") pod \"664dc1e9-b220-4dd9-8576-b5798850bc57\" (UID: \"664dc1e9-b220-4dd9-8576-b5798850bc57\") "
Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.339698 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/664dc1e9-b220-4dd9-8576-b5798850bc57-proxy-ca-bundles\") pod \"664dc1e9-b220-4dd9-8576-b5798850bc57\" (UID: \"664dc1e9-b220-4dd9-8576-b5798850bc57\") "
Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.339753 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/664dc1e9-b220-4dd9-8576-b5798850bc57-client-ca\") pod \"664dc1e9-b220-4dd9-8576-b5798850bc57\" (UID: \"664dc1e9-b220-4dd9-8576-b5798850bc57\") "
Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.339789 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/664dc1e9-b220-4dd9-8576-b5798850bc57-config\") pod \"664dc1e9-b220-4dd9-8576-b5798850bc57\" (UID: \"664dc1e9-b220-4dd9-8576-b5798850bc57\") "
Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.339841 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/664dc1e9-b220-4dd9-8576-b5798850bc57-serving-cert\") pod \"664dc1e9-b220-4dd9-8576-b5798850bc57\" (UID: \"664dc1e9-b220-4dd9-8576-b5798850bc57\") "
Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.339859 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-697ts\" (UniqueName: \"kubernetes.io/projected/664dc1e9-b220-4dd9-8576-b5798850bc57-kube-api-access-697ts\") pod \"664dc1e9-b220-4dd9-8576-b5798850bc57\" (UID: \"664dc1e9-b220-4dd9-8576-b5798850bc57\") "
Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.340004 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88667356-ca96-429b-a986-2018168d5da2-serving-cert\") pod \"controller-manager-8469db6cb8-pclzc\" (UID: \"88667356-ca96-429b-a986-2018168d5da2\") " pod="openshift-controller-manager/controller-manager-8469db6cb8-pclzc"
Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.340045 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88667356-ca96-429b-a986-2018168d5da2-config\") pod \"controller-manager-8469db6cb8-pclzc\" (UID: \"88667356-ca96-429b-a986-2018168d5da2\") " pod="openshift-controller-manager/controller-manager-8469db6cb8-pclzc"
Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.340061 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/88667356-ca96-429b-a986-2018168d5da2-client-ca\") pod \"controller-manager-8469db6cb8-pclzc\" (UID: \"88667356-ca96-429b-a986-2018168d5da2\") " pod="openshift-controller-manager/controller-manager-8469db6cb8-pclzc"
Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.340078 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4kgn\" (UniqueName: \"kubernetes.io/projected/88667356-ca96-429b-a986-2018168d5da2-kube-api-access-l4kgn\") pod \"controller-manager-8469db6cb8-pclzc\" (UID: \"88667356-ca96-429b-a986-2018168d5da2\") " pod="openshift-controller-manager/controller-manager-8469db6cb8-pclzc"
Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.340135 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/88667356-ca96-429b-a986-2018168d5da2-proxy-ca-bundles\") pod \"controller-manager-8469db6cb8-pclzc\" (UID: \"88667356-ca96-429b-a986-2018168d5da2\") " pod="openshift-controller-manager/controller-manager-8469db6cb8-pclzc"
Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.340151 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/88667356-ca96-429b-a986-2018168d5da2-tmp\") pod \"controller-manager-8469db6cb8-pclzc\" (UID: \"88667356-ca96-429b-a986-2018168d5da2\") " pod="openshift-controller-manager/controller-manager-8469db6cb8-pclzc"
Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.342334 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/664dc1e9-b220-4dd9-8576-b5798850bc57-tmp" (OuterVolumeSpecName: "tmp") pod "664dc1e9-b220-4dd9-8576-b5798850bc57" (UID: "664dc1e9-b220-4dd9-8576-b5798850bc57"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.342817 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/664dc1e9-b220-4dd9-8576-b5798850bc57-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "664dc1e9-b220-4dd9-8576-b5798850bc57" (UID: "664dc1e9-b220-4dd9-8576-b5798850bc57"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.343012 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/664dc1e9-b220-4dd9-8576-b5798850bc57-client-ca" (OuterVolumeSpecName: "client-ca") pod "664dc1e9-b220-4dd9-8576-b5798850bc57" (UID: "664dc1e9-b220-4dd9-8576-b5798850bc57"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.343289 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/664dc1e9-b220-4dd9-8576-b5798850bc57-config" (OuterVolumeSpecName: "config") pod "664dc1e9-b220-4dd9-8576-b5798850bc57" (UID: "664dc1e9-b220-4dd9-8576-b5798850bc57"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.355292 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/664dc1e9-b220-4dd9-8576-b5798850bc57-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "664dc1e9-b220-4dd9-8576-b5798850bc57" (UID: "664dc1e9-b220-4dd9-8576-b5798850bc57"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.362662 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/664dc1e9-b220-4dd9-8576-b5798850bc57-kube-api-access-697ts" (OuterVolumeSpecName: "kube-api-access-697ts") pod "664dc1e9-b220-4dd9-8576-b5798850bc57" (UID: "664dc1e9-b220-4dd9-8576-b5798850bc57"). InnerVolumeSpecName "kube-api-access-697ts". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.439488 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64b4fd558d-xn8z9"]
Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.441703 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/88667356-ca96-429b-a986-2018168d5da2-proxy-ca-bundles\") pod \"controller-manager-8469db6cb8-pclzc\" (UID: \"88667356-ca96-429b-a986-2018168d5da2\") " pod="openshift-controller-manager/controller-manager-8469db6cb8-pclzc"
Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.441930 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/88667356-ca96-429b-a986-2018168d5da2-tmp\") pod \"controller-manager-8469db6cb8-pclzc\" (UID: \"88667356-ca96-429b-a986-2018168d5da2\") " pod="openshift-controller-manager/controller-manager-8469db6cb8-pclzc"
Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.442223 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88667356-ca96-429b-a986-2018168d5da2-serving-cert\") pod \"controller-manager-8469db6cb8-pclzc\" (UID: \"88667356-ca96-429b-a986-2018168d5da2\") " pod="openshift-controller-manager/controller-manager-8469db6cb8-pclzc"
Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.442283 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88667356-ca96-429b-a986-2018168d5da2-config\") pod \"controller-manager-8469db6cb8-pclzc\" (UID: \"88667356-ca96-429b-a986-2018168d5da2\") " pod="openshift-controller-manager/controller-manager-8469db6cb8-pclzc"
Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.442301 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/88667356-ca96-429b-a986-2018168d5da2-client-ca\") pod \"controller-manager-8469db6cb8-pclzc\" (UID: \"88667356-ca96-429b-a986-2018168d5da2\") " pod="openshift-controller-manager/controller-manager-8469db6cb8-pclzc"
Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.442328 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l4kgn\" (UniqueName: \"kubernetes.io/projected/88667356-ca96-429b-a986-2018168d5da2-kube-api-access-l4kgn\") pod \"controller-manager-8469db6cb8-pclzc\" (UID: \"88667356-ca96-429b-a986-2018168d5da2\") " pod="openshift-controller-manager/controller-manager-8469db6cb8-pclzc"
Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.442451 5115 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/664dc1e9-b220-4dd9-8576-b5798850bc57-config\") on node \"crc\" DevicePath \"\""
Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.442461 5115 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/664dc1e9-b220-4dd9-8576-b5798850bc57-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.442471 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-697ts\" (UniqueName: \"kubernetes.io/projected/664dc1e9-b220-4dd9-8576-b5798850bc57-kube-api-access-697ts\") on node \"crc\" DevicePath \"\""
Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.442486 5115 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/664dc1e9-b220-4dd9-8576-b5798850bc57-tmp\") on node \"crc\" DevicePath \"\""
Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.442494 5115 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/664dc1e9-b220-4dd9-8576-b5798850bc57-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.442504 5115 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/664dc1e9-b220-4dd9-8576-b5798850bc57-client-ca\") on node \"crc\" DevicePath \"\""
Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.443204 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/88667356-ca96-429b-a986-2018168d5da2-proxy-ca-bundles\") pod \"controller-manager-8469db6cb8-pclzc\" (UID: \"88667356-ca96-429b-a986-2018168d5da2\") " pod="openshift-controller-manager/controller-manager-8469db6cb8-pclzc"
Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.443971 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/88667356-ca96-429b-a986-2018168d5da2-client-ca\") pod \"controller-manager-8469db6cb8-pclzc\" (UID: \"88667356-ca96-429b-a986-2018168d5da2\") " pod="openshift-controller-manager/controller-manager-8469db6cb8-pclzc"
Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.444533 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88667356-ca96-429b-a986-2018168d5da2-config\") pod \"controller-manager-8469db6cb8-pclzc\" (UID: \"88667356-ca96-429b-a986-2018168d5da2\") " pod="openshift-controller-manager/controller-manager-8469db6cb8-pclzc"
Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.444877 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/88667356-ca96-429b-a986-2018168d5da2-tmp\") pod \"controller-manager-8469db6cb8-pclzc\" (UID: \"88667356-ca96-429b-a986-2018168d5da2\") " pod="openshift-controller-manager/controller-manager-8469db6cb8-pclzc"
Jan 20
09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.448588 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88667356-ca96-429b-a986-2018168d5da2-serving-cert\") pod \"controller-manager-8469db6cb8-pclzc\" (UID: \"88667356-ca96-429b-a986-2018168d5da2\") " pod="openshift-controller-manager/controller-manager-8469db6cb8-pclzc" Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.458504 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4kgn\" (UniqueName: \"kubernetes.io/projected/88667356-ca96-429b-a986-2018168d5da2-kube-api-access-l4kgn\") pod \"controller-manager-8469db6cb8-pclzc\" (UID: \"88667356-ca96-429b-a986-2018168d5da2\") " pod="openshift-controller-manager/controller-manager-8469db6cb8-pclzc" Jan 20 09:10:51 crc kubenswrapper[5115]: W0120 09:10:51.479260 5115 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf6822615_2e54_40b4_a17f_9d5fb26e31db.slice/crio-ec45a798e536db03e60218024cbf350c164b3ba144b87a54d410e02900429d88 WatchSource:0}: Error finding container ec45a798e536db03e60218024cbf350c164b3ba144b87a54d410e02900429d88: Status 404 returned error can't find the container with id ec45a798e536db03e60218024cbf350c164b3ba144b87a54d410e02900429d88 Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.589201 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-8469db6cb8-pclzc" Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.729768 5115 generic.go:358] "Generic (PLEG): container finished" podID="b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3" containerID="099a58929bcd11d7806830d94c60b1c1e735c7d4ed3c769e2373744a991c063d" exitCode=0 Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.729861 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vv5qk" event={"ID":"b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3","Type":"ContainerDied","Data":"099a58929bcd11d7806830d94c60b1c1e735c7d4ed3c769e2373744a991c063d"} Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.733702 5115 generic.go:358] "Generic (PLEG): container finished" podID="c182ef91-1ca8-4330-bd75-8120c4401b54" containerID="288865d63bc61bc4176419a2d913e42143434094aaa92d600adfeadef0831036" exitCode=0 Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.734070 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cn6h9" event={"ID":"c182ef91-1ca8-4330-bd75-8120c4401b54","Type":"ContainerDied","Data":"288865d63bc61bc4176419a2d913e42143434094aaa92d600adfeadef0831036"} Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.736207 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-lg8fb" event={"ID":"664dc1e9-b220-4dd9-8576-b5798850bc57","Type":"ContainerDied","Data":"11a76b2995d1e7821d8b5caa00d0b12a5012c7b092dc0a7b36b27b7457c6f577"} Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.736266 5115 scope.go:117] "RemoveContainer" containerID="883ad34e44bc13a65fb331c725c96d57ffd7da473ec9ed16860ba076f2702bf1" Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.736438 5115 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-lg8fb" Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.744239 5115 generic.go:358] "Generic (PLEG): container finished" podID="e388c4ad-0d02-4736-b503-a96f7478edb4" containerID="9c95486a14862e504cd21f4a5c67708671af72d9da1f61dfdf84b84b34aa1ed4" exitCode=0 Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.744317 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mrnvw" event={"ID":"e388c4ad-0d02-4736-b503-a96f7478edb4","Type":"ContainerDied","Data":"9c95486a14862e504cd21f4a5c67708671af72d9da1f61dfdf84b84b34aa1ed4"} Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.748298 5115 generic.go:358] "Generic (PLEG): container finished" podID="098c57a3-a775-41d0-b528-6833df51eb70" containerID="ee94f68db59e4e1ddf21ca6ca9dd7fd93edccbc4ea24208558bcdd84d58df32e" exitCode=0 Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.748386 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ln8lc" event={"ID":"098c57a3-a775-41d0-b528-6833df51eb70","Type":"ContainerDied","Data":"ee94f68db59e4e1ddf21ca6ca9dd7fd93edccbc4ea24208558bcdd84d58df32e"} Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.766712 5115 generic.go:358] "Generic (PLEG): container finished" podID="f9d4e242-d348-4f3f-8453-612b19e41f3a" containerID="74b5178a1b534ac941dea2392034f3b3ec2731f44ad8c1e9849d9151b8564a9d" exitCode=0 Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.766863 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5plkc" event={"ID":"f9d4e242-d348-4f3f-8453-612b19e41f3a","Type":"ContainerDied","Data":"74b5178a1b534ac941dea2392034f3b3ec2731f44ad8c1e9849d9151b8564a9d"} Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.780563 5115 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jxpqr" Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.780586 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jxpqr" event={"ID":"b39cc292-22ad-4fb0-9d3f-6467c81680eb","Type":"ContainerDied","Data":"5fb596da1738dbe8416b2b3a595dc262a4288da61aa3303a2ea6eb0db0479d63"} Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.781648 5115 scope.go:117] "RemoveContainer" containerID="b0488d20e94845aedd9b1bbe8d5471305129edf3c1b7b5a598c3cede13658a01" Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.793643 5115 generic.go:358] "Generic (PLEG): container finished" podID="57355d9d-a14f-4cf0-8a63-842b27765063" containerID="1c7349b861fcc3cdec3f5eaa960ebb43329afec1ce06d636fabc17f9cb7e20c8" exitCode=0 Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.793747 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-45pv6" event={"ID":"57355d9d-a14f-4cf0-8a63-842b27765063","Type":"ContainerDied","Data":"1c7349b861fcc3cdec3f5eaa960ebb43329afec1ce06d636fabc17f9cb7e20c8"} Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.801160 5115 generic.go:358] "Generic (PLEG): container finished" podID="8b758f72-1c19-45ea-8f26-580952f254a6" containerID="935cf80d7a9856e0a66b21d9b86b0fed97665532ad80b040c550b50951c14c19" exitCode=0 Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.801439 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b5s99" event={"ID":"8b758f72-1c19-45ea-8f26-580952f254a6","Type":"ContainerDied","Data":"935cf80d7a9856e0a66b21d9b86b0fed97665532ad80b040c550b50951c14c19"} Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.807639 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-64b4fd558d-xn8z9" 
event={"ID":"f6822615-2e54-40b4-a17f-9d5fb26e31db","Type":"ContainerStarted","Data":"8d2ed5414b2aa5ec8036fc1daac27d0b00dca8637fc9aa123e503905f2f66319"} Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.807702 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-64b4fd558d-xn8z9" event={"ID":"f6822615-2e54-40b4-a17f-9d5fb26e31db","Type":"ContainerStarted","Data":"ec45a798e536db03e60218024cbf350c164b3ba144b87a54d410e02900429d88"} Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.808578 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-64b4fd558d-xn8z9" Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.822377 5115 generic.go:358] "Generic (PLEG): container finished" podID="1d51d284-ea4b-4e3f-95bd-de28c5df1f3c" containerID="a33dfb9140b05712014768cf8b01acc9283196096d0f87e1b764f33c91c5086f" exitCode=0 Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.822655 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2dlnj" event={"ID":"1d51d284-ea4b-4e3f-95bd-de28c5df1f3c","Type":"ContainerDied","Data":"a33dfb9140b05712014768cf8b01acc9283196096d0f87e1b764f33c91c5086f"} Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.856294 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-8469db6cb8-pclzc"] Jan 20 09:10:51 crc kubenswrapper[5115]: W0120 09:10:51.874866 5115 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod88667356_ca96_429b_a986_2018168d5da2.slice/crio-bc82955899180d05360cc4862d2c67462685d6b730e0a5fb73668f78e7e7679f WatchSource:0}: Error finding container bc82955899180d05360cc4862d2c67462685d6b730e0a5fb73668f78e7e7679f: Status 404 returned error can't find the container with id 
bc82955899180d05360cc4862d2c67462685d6b730e0a5fb73668f78e7e7679f Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.886421 5115 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-lg8fb"] Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.897252 5115 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-lg8fb"] Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.902027 5115 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-jxpqr"] Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.904948 5115 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-jxpqr"] Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.919697 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-64b4fd558d-xn8z9" podStartSLOduration=14.919666294 podStartE2EDuration="14.919666294s" podCreationTimestamp="2026-01-20 09:10:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:10:51.914102435 +0000 UTC m=+162.082880965" watchObservedRunningTime="2026-01-20 09:10:51.919666294 +0000 UTC m=+162.088444824" Jan 20 09:10:52 crc kubenswrapper[5115]: I0120 09:10:52.224442 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d93cff2-21b0-4fcb-b899-b6efe5a56822" path="/var/lib/kubelet/pods/4d93cff2-21b0-4fcb-b899-b6efe5a56822/volumes" Jan 20 09:10:52 crc kubenswrapper[5115]: I0120 09:10:52.225756 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="664dc1e9-b220-4dd9-8576-b5798850bc57" path="/var/lib/kubelet/pods/664dc1e9-b220-4dd9-8576-b5798850bc57/volumes" Jan 20 09:10:52 crc kubenswrapper[5115]: 
I0120 09:10:52.226432 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b39cc292-22ad-4fb0-9d3f-6467c81680eb" path="/var/lib/kubelet/pods/b39cc292-22ad-4fb0-9d3f-6467c81680eb/volumes" Jan 20 09:10:52 crc kubenswrapper[5115]: I0120 09:10:52.696980 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-64b4fd558d-xn8z9" Jan 20 09:10:52 crc kubenswrapper[5115]: I0120 09:10:52.832188 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mrnvw" event={"ID":"e388c4ad-0d02-4736-b503-a96f7478edb4","Type":"ContainerStarted","Data":"12bacbbfbfe9faaa1e7cb579c3b31cef9d5d216f92866ee82cd59e4a269034a4"} Jan 20 09:10:52 crc kubenswrapper[5115]: I0120 09:10:52.834783 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ln8lc" event={"ID":"098c57a3-a775-41d0-b528-6833df51eb70","Type":"ContainerStarted","Data":"262846a0b39ea0c22c3e2461fb7a80f6f691c5c001332b515947c0f30875a14d"} Jan 20 09:10:52 crc kubenswrapper[5115]: I0120 09:10:52.837603 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5plkc" event={"ID":"f9d4e242-d348-4f3f-8453-612b19e41f3a","Type":"ContainerStarted","Data":"094fa074aa44e27d111ea636cfa5e177561853a33b91fef37dd4590007b099fc"} Jan 20 09:10:52 crc kubenswrapper[5115]: I0120 09:10:52.841528 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-45pv6" event={"ID":"57355d9d-a14f-4cf0-8a63-842b27765063","Type":"ContainerStarted","Data":"3b2695392662c24c56f1422eadae97e754a2f16833a327817bd2b7835887f6bf"} Jan 20 09:10:52 crc kubenswrapper[5115]: I0120 09:10:52.845502 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b5s99" 
event={"ID":"8b758f72-1c19-45ea-8f26-580952f254a6","Type":"ContainerStarted","Data":"fc2a291b34f7498fa1d59e04fd9f020e1e86521c4cb4fc751ea58888835018e9"} Jan 20 09:10:52 crc kubenswrapper[5115]: I0120 09:10:52.848120 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2dlnj" event={"ID":"1d51d284-ea4b-4e3f-95bd-de28c5df1f3c","Type":"ContainerStarted","Data":"c06f862960c9bdfaf0ac5b708c347681a6defb95c62d2ffbb57bb0f49aff19dc"} Jan 20 09:10:52 crc kubenswrapper[5115]: I0120 09:10:52.850299 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vv5qk" event={"ID":"b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3","Type":"ContainerStarted","Data":"16d160e92d5f6eb7e86089b3e9ed2b1d0541d36b9b9f8bf35054aecefda063d4"} Jan 20 09:10:52 crc kubenswrapper[5115]: I0120 09:10:52.854410 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cn6h9" event={"ID":"c182ef91-1ca8-4330-bd75-8120c4401b54","Type":"ContainerStarted","Data":"310176ff3faa068eec35b262e875ff2ef66e7e5cb3cf7c06006974317bf85b74"} Jan 20 09:10:52 crc kubenswrapper[5115]: I0120 09:10:52.856554 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8469db6cb8-pclzc" event={"ID":"88667356-ca96-429b-a986-2018168d5da2","Type":"ContainerStarted","Data":"787d6296c837165f0031e2f3b6f84cf69106700382a0334b57d327ab1bd28e64"} Jan 20 09:10:52 crc kubenswrapper[5115]: I0120 09:10:52.856598 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8469db6cb8-pclzc" event={"ID":"88667356-ca96-429b-a986-2018168d5da2","Type":"ContainerStarted","Data":"bc82955899180d05360cc4862d2c67462685d6b730e0a5fb73668f78e7e7679f"} Jan 20 09:10:52 crc kubenswrapper[5115]: I0120 09:10:52.859228 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mrnvw" 
podStartSLOduration=5.700094382 podStartE2EDuration="36.85920782s" podCreationTimestamp="2026-01-20 09:10:16 +0000 UTC" firstStartedPulling="2026-01-20 09:10:19.258009508 +0000 UTC m=+129.426788038" lastFinishedPulling="2026-01-20 09:10:50.417122946 +0000 UTC m=+160.585901476" observedRunningTime="2026-01-20 09:10:52.851880404 +0000 UTC m=+163.020658944" watchObservedRunningTime="2026-01-20 09:10:52.85920782 +0000 UTC m=+163.027986350" Jan 20 09:10:52 crc kubenswrapper[5115]: I0120 09:10:52.895467 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5plkc" podStartSLOduration=4.893027077 podStartE2EDuration="34.89544151s" podCreationTimestamp="2026-01-20 09:10:18 +0000 UTC" firstStartedPulling="2026-01-20 09:10:20.332881573 +0000 UTC m=+130.501660103" lastFinishedPulling="2026-01-20 09:10:50.335295996 +0000 UTC m=+160.504074536" observedRunningTime="2026-01-20 09:10:52.872609198 +0000 UTC m=+163.041387748" watchObservedRunningTime="2026-01-20 09:10:52.89544151 +0000 UTC m=+163.064220040" Jan 20 09:10:52 crc kubenswrapper[5115]: I0120 09:10:52.931956 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-b5s99" podStartSLOduration=4.869077975 podStartE2EDuration="34.931934246s" podCreationTimestamp="2026-01-20 09:10:18 +0000 UTC" firstStartedPulling="2026-01-20 09:10:20.323560493 +0000 UTC m=+130.492339023" lastFinishedPulling="2026-01-20 09:10:50.386416764 +0000 UTC m=+160.555195294" observedRunningTime="2026-01-20 09:10:52.930255601 +0000 UTC m=+163.099034131" watchObservedRunningTime="2026-01-20 09:10:52.931934246 +0000 UTC m=+163.100712776" Jan 20 09:10:52 crc kubenswrapper[5115]: I0120 09:10:52.932210 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vv5qk" podStartSLOduration=4.94167652 podStartE2EDuration="33.932203833s" podCreationTimestamp="2026-01-20 09:10:19 
+0000 UTC" firstStartedPulling="2026-01-20 09:10:21.393668001 +0000 UTC m=+131.562446531" lastFinishedPulling="2026-01-20 09:10:50.384195314 +0000 UTC m=+160.552973844" observedRunningTime="2026-01-20 09:10:52.898505912 +0000 UTC m=+163.067284442" watchObservedRunningTime="2026-01-20 09:10:52.932203833 +0000 UTC m=+163.100982363" Jan 20 09:10:52 crc kubenswrapper[5115]: I0120 09:10:52.949826 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2dlnj" podStartSLOduration=5.897830594 podStartE2EDuration="36.949799533s" podCreationTimestamp="2026-01-20 09:10:16 +0000 UTC" firstStartedPulling="2026-01-20 09:10:19.310502875 +0000 UTC m=+129.479281405" lastFinishedPulling="2026-01-20 09:10:50.362471814 +0000 UTC m=+160.531250344" observedRunningTime="2026-01-20 09:10:52.948270903 +0000 UTC m=+163.117049433" watchObservedRunningTime="2026-01-20 09:10:52.949799533 +0000 UTC m=+163.118578083" Jan 20 09:10:52 crc kubenswrapper[5115]: I0120 09:10:52.972865 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-45pv6" podStartSLOduration=4.927397776 podStartE2EDuration="33.97284215s" podCreationTimestamp="2026-01-20 09:10:19 +0000 UTC" firstStartedPulling="2026-01-20 09:10:21.369835852 +0000 UTC m=+131.538614382" lastFinishedPulling="2026-01-20 09:10:50.415280226 +0000 UTC m=+160.584058756" observedRunningTime="2026-01-20 09:10:52.970765574 +0000 UTC m=+163.139544104" watchObservedRunningTime="2026-01-20 09:10:52.97284215 +0000 UTC m=+163.141620670" Jan 20 09:10:52 crc kubenswrapper[5115]: I0120 09:10:52.992727 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ln8lc" podStartSLOduration=5.911964232 podStartE2EDuration="36.992705562s" podCreationTimestamp="2026-01-20 09:10:16 +0000 UTC" firstStartedPulling="2026-01-20 09:10:19.281413085 +0000 UTC m=+129.450191615" 
lastFinishedPulling="2026-01-20 09:10:50.362154415 +0000 UTC m=+160.530932945" observedRunningTime="2026-01-20 09:10:52.987173114 +0000 UTC m=+163.155951654" watchObservedRunningTime="2026-01-20 09:10:52.992705562 +0000 UTC m=+163.161484092" Jan 20 09:10:53 crc kubenswrapper[5115]: I0120 09:10:53.041043 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-8469db6cb8-pclzc" podStartSLOduration=16.041025115 podStartE2EDuration="16.041025115s" podCreationTimestamp="2026-01-20 09:10:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:10:53.013216441 +0000 UTC m=+163.181994971" watchObservedRunningTime="2026-01-20 09:10:53.041025115 +0000 UTC m=+163.209803645" Jan 20 09:10:53 crc kubenswrapper[5115]: I0120 09:10:53.043137 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-cn6h9" podStartSLOduration=5.910013619 podStartE2EDuration="37.043129711s" podCreationTimestamp="2026-01-20 09:10:16 +0000 UTC" firstStartedPulling="2026-01-20 09:10:19.251091583 +0000 UTC m=+129.419870113" lastFinishedPulling="2026-01-20 09:10:50.384207675 +0000 UTC m=+160.552986205" observedRunningTime="2026-01-20 09:10:53.038810685 +0000 UTC m=+163.207589215" watchObservedRunningTime="2026-01-20 09:10:53.043129711 +0000 UTC m=+163.211908231" Jan 20 09:10:53 crc kubenswrapper[5115]: I0120 09:10:53.862466 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-8469db6cb8-pclzc" Jan 20 09:10:53 crc kubenswrapper[5115]: I0120 09:10:53.868429 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-8469db6cb8-pclzc" Jan 20 09:10:55 crc kubenswrapper[5115]: I0120 09:10:55.340828 5115 kubelet.go:2553] "SyncLoop DELETE" 
source="api" pods=["openshift-controller-manager/controller-manager-8469db6cb8-pclzc"] Jan 20 09:10:55 crc kubenswrapper[5115]: I0120 09:10:55.379330 5115 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64b4fd558d-xn8z9"] Jan 20 09:10:55 crc kubenswrapper[5115]: I0120 09:10:55.380091 5115 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-64b4fd558d-xn8z9" podUID="f6822615-2e54-40b4-a17f-9d5fb26e31db" containerName="route-controller-manager" containerID="cri-o://8d2ed5414b2aa5ec8036fc1daac27d0b00dca8637fc9aa123e503905f2f66319" gracePeriod=30 Jan 20 09:10:55 crc kubenswrapper[5115]: I0120 09:10:55.758526 5115 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-64b4fd558d-xn8z9" Jan 20 09:10:55 crc kubenswrapper[5115]: I0120 09:10:55.790694 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6dbb47955d-p9csw"] Jan 20 09:10:55 crc kubenswrapper[5115]: I0120 09:10:55.805260 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f6822615-2e54-40b4-a17f-9d5fb26e31db" containerName="route-controller-manager" Jan 20 09:10:55 crc kubenswrapper[5115]: I0120 09:10:55.805295 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6822615-2e54-40b4-a17f-9d5fb26e31db" containerName="route-controller-manager" Jan 20 09:10:55 crc kubenswrapper[5115]: I0120 09:10:55.805459 5115 memory_manager.go:356] "RemoveStaleState removing state" podUID="f6822615-2e54-40b4-a17f-9d5fb26e31db" containerName="route-controller-manager" Jan 20 09:10:55 crc kubenswrapper[5115]: I0120 09:10:55.851621 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6dbb47955d-p9csw"] Jan 20 09:10:55 crc 
kubenswrapper[5115]: I0120 09:10:55.851811 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6dbb47955d-p9csw" Jan 20 09:10:55 crc kubenswrapper[5115]: I0120 09:10:55.878754 5115 generic.go:358] "Generic (PLEG): container finished" podID="f6822615-2e54-40b4-a17f-9d5fb26e31db" containerID="8d2ed5414b2aa5ec8036fc1daac27d0b00dca8637fc9aa123e503905f2f66319" exitCode=0 Jan 20 09:10:55 crc kubenswrapper[5115]: I0120 09:10:55.879604 5115 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-64b4fd558d-xn8z9" Jan 20 09:10:55 crc kubenswrapper[5115]: I0120 09:10:55.879710 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-64b4fd558d-xn8z9" event={"ID":"f6822615-2e54-40b4-a17f-9d5fb26e31db","Type":"ContainerDied","Data":"8d2ed5414b2aa5ec8036fc1daac27d0b00dca8637fc9aa123e503905f2f66319"} Jan 20 09:10:55 crc kubenswrapper[5115]: I0120 09:10:55.879756 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-64b4fd558d-xn8z9" event={"ID":"f6822615-2e54-40b4-a17f-9d5fb26e31db","Type":"ContainerDied","Data":"ec45a798e536db03e60218024cbf350c164b3ba144b87a54d410e02900429d88"} Jan 20 09:10:55 crc kubenswrapper[5115]: I0120 09:10:55.879783 5115 scope.go:117] "RemoveContainer" containerID="8d2ed5414b2aa5ec8036fc1daac27d0b00dca8637fc9aa123e503905f2f66319" Jan 20 09:10:55 crc kubenswrapper[5115]: I0120 09:10:55.904068 5115 scope.go:117] "RemoveContainer" containerID="8d2ed5414b2aa5ec8036fc1daac27d0b00dca8637fc9aa123e503905f2f66319" Jan 20 09:10:55 crc kubenswrapper[5115]: E0120 09:10:55.904639 5115 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"8d2ed5414b2aa5ec8036fc1daac27d0b00dca8637fc9aa123e503905f2f66319\": container with ID starting with 8d2ed5414b2aa5ec8036fc1daac27d0b00dca8637fc9aa123e503905f2f66319 not found: ID does not exist" containerID="8d2ed5414b2aa5ec8036fc1daac27d0b00dca8637fc9aa123e503905f2f66319"
Jan 20 09:10:55 crc kubenswrapper[5115]: I0120 09:10:55.904685 5115 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d2ed5414b2aa5ec8036fc1daac27d0b00dca8637fc9aa123e503905f2f66319"} err="failed to get container status \"8d2ed5414b2aa5ec8036fc1daac27d0b00dca8637fc9aa123e503905f2f66319\": rpc error: code = NotFound desc = could not find container \"8d2ed5414b2aa5ec8036fc1daac27d0b00dca8637fc9aa123e503905f2f66319\": container with ID starting with 8d2ed5414b2aa5ec8036fc1daac27d0b00dca8637fc9aa123e503905f2f66319 not found: ID does not exist"
Jan 20 09:10:55 crc kubenswrapper[5115]: I0120 09:10:55.912660 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f6822615-2e54-40b4-a17f-9d5fb26e31db-tmp\") pod \"f6822615-2e54-40b4-a17f-9d5fb26e31db\" (UID: \"f6822615-2e54-40b4-a17f-9d5fb26e31db\") "
Jan 20 09:10:55 crc kubenswrapper[5115]: I0120 09:10:55.912732 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f6822615-2e54-40b4-a17f-9d5fb26e31db-client-ca\") pod \"f6822615-2e54-40b4-a17f-9d5fb26e31db\" (UID: \"f6822615-2e54-40b4-a17f-9d5fb26e31db\") "
Jan 20 09:10:55 crc kubenswrapper[5115]: I0120 09:10:55.912797 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hd9fq\" (UniqueName: \"kubernetes.io/projected/f6822615-2e54-40b4-a17f-9d5fb26e31db-kube-api-access-hd9fq\") pod \"f6822615-2e54-40b4-a17f-9d5fb26e31db\" (UID: \"f6822615-2e54-40b4-a17f-9d5fb26e31db\") "
Jan 20 09:10:55 crc kubenswrapper[5115]: I0120 09:10:55.912830 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6822615-2e54-40b4-a17f-9d5fb26e31db-serving-cert\") pod \"f6822615-2e54-40b4-a17f-9d5fb26e31db\" (UID: \"f6822615-2e54-40b4-a17f-9d5fb26e31db\") "
Jan 20 09:10:55 crc kubenswrapper[5115]: I0120 09:10:55.912922 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6822615-2e54-40b4-a17f-9d5fb26e31db-config\") pod \"f6822615-2e54-40b4-a17f-9d5fb26e31db\" (UID: \"f6822615-2e54-40b4-a17f-9d5fb26e31db\") "
Jan 20 09:10:55 crc kubenswrapper[5115]: I0120 09:10:55.913281 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6822615-2e54-40b4-a17f-9d5fb26e31db-tmp" (OuterVolumeSpecName: "tmp") pod "f6822615-2e54-40b4-a17f-9d5fb26e31db" (UID: "f6822615-2e54-40b4-a17f-9d5fb26e31db"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 20 09:10:55 crc kubenswrapper[5115]: I0120 09:10:55.913998 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6822615-2e54-40b4-a17f-9d5fb26e31db-client-ca" (OuterVolumeSpecName: "client-ca") pod "f6822615-2e54-40b4-a17f-9d5fb26e31db" (UID: "f6822615-2e54-40b4-a17f-9d5fb26e31db"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 20 09:10:55 crc kubenswrapper[5115]: I0120 09:10:55.914035 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6822615-2e54-40b4-a17f-9d5fb26e31db-config" (OuterVolumeSpecName: "config") pod "f6822615-2e54-40b4-a17f-9d5fb26e31db" (UID: "f6822615-2e54-40b4-a17f-9d5fb26e31db"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 20 09:10:55 crc kubenswrapper[5115]: I0120 09:10:55.921362 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6822615-2e54-40b4-a17f-9d5fb26e31db-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f6822615-2e54-40b4-a17f-9d5fb26e31db" (UID: "f6822615-2e54-40b4-a17f-9d5fb26e31db"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 20 09:10:55 crc kubenswrapper[5115]: I0120 09:10:55.922909 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6822615-2e54-40b4-a17f-9d5fb26e31db-kube-api-access-hd9fq" (OuterVolumeSpecName: "kube-api-access-hd9fq") pod "f6822615-2e54-40b4-a17f-9d5fb26e31db" (UID: "f6822615-2e54-40b4-a17f-9d5fb26e31db"). InnerVolumeSpecName "kube-api-access-hd9fq". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 20 09:10:56 crc kubenswrapper[5115]: I0120 09:10:56.015160 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ea354490-c1e9-4cb2-a05e-2691aa628f04-serving-cert\") pod \"route-controller-manager-6dbb47955d-p9csw\" (UID: \"ea354490-c1e9-4cb2-a05e-2691aa628f04\") " pod="openshift-route-controller-manager/route-controller-manager-6dbb47955d-p9csw"
Jan 20 09:10:56 crc kubenswrapper[5115]: I0120 09:10:56.015225 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ea354490-c1e9-4cb2-a05e-2691aa628f04-tmp\") pod \"route-controller-manager-6dbb47955d-p9csw\" (UID: \"ea354490-c1e9-4cb2-a05e-2691aa628f04\") " pod="openshift-route-controller-manager/route-controller-manager-6dbb47955d-p9csw"
Jan 20 09:10:56 crc kubenswrapper[5115]: I0120 09:10:56.015291 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ea354490-c1e9-4cb2-a05e-2691aa628f04-client-ca\") pod \"route-controller-manager-6dbb47955d-p9csw\" (UID: \"ea354490-c1e9-4cb2-a05e-2691aa628f04\") " pod="openshift-route-controller-manager/route-controller-manager-6dbb47955d-p9csw"
Jan 20 09:10:56 crc kubenswrapper[5115]: I0120 09:10:56.015311 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvb75\" (UniqueName: \"kubernetes.io/projected/ea354490-c1e9-4cb2-a05e-2691aa628f04-kube-api-access-lvb75\") pod \"route-controller-manager-6dbb47955d-p9csw\" (UID: \"ea354490-c1e9-4cb2-a05e-2691aa628f04\") " pod="openshift-route-controller-manager/route-controller-manager-6dbb47955d-p9csw"
Jan 20 09:10:56 crc kubenswrapper[5115]: I0120 09:10:56.016242 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea354490-c1e9-4cb2-a05e-2691aa628f04-config\") pod \"route-controller-manager-6dbb47955d-p9csw\" (UID: \"ea354490-c1e9-4cb2-a05e-2691aa628f04\") " pod="openshift-route-controller-manager/route-controller-manager-6dbb47955d-p9csw"
Jan 20 09:10:56 crc kubenswrapper[5115]: I0120 09:10:56.016667 5115 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f6822615-2e54-40b4-a17f-9d5fb26e31db-tmp\") on node \"crc\" DevicePath \"\""
Jan 20 09:10:56 crc kubenswrapper[5115]: I0120 09:10:56.016774 5115 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f6822615-2e54-40b4-a17f-9d5fb26e31db-client-ca\") on node \"crc\" DevicePath \"\""
Jan 20 09:10:56 crc kubenswrapper[5115]: I0120 09:10:56.016871 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hd9fq\" (UniqueName: \"kubernetes.io/projected/f6822615-2e54-40b4-a17f-9d5fb26e31db-kube-api-access-hd9fq\") on node \"crc\" DevicePath \"\""
Jan 20 09:10:56 crc kubenswrapper[5115]: I0120 09:10:56.016995 5115 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6822615-2e54-40b4-a17f-9d5fb26e31db-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 20 09:10:56 crc kubenswrapper[5115]: I0120 09:10:56.017089 5115 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6822615-2e54-40b4-a17f-9d5fb26e31db-config\") on node \"crc\" DevicePath \"\""
Jan 20 09:10:56 crc kubenswrapper[5115]: I0120 09:10:56.118508 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ea354490-c1e9-4cb2-a05e-2691aa628f04-client-ca\") pod \"route-controller-manager-6dbb47955d-p9csw\" (UID: \"ea354490-c1e9-4cb2-a05e-2691aa628f04\") " pod="openshift-route-controller-manager/route-controller-manager-6dbb47955d-p9csw"
Jan 20 09:10:56 crc kubenswrapper[5115]: I0120 09:10:56.119131 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lvb75\" (UniqueName: \"kubernetes.io/projected/ea354490-c1e9-4cb2-a05e-2691aa628f04-kube-api-access-lvb75\") pod \"route-controller-manager-6dbb47955d-p9csw\" (UID: \"ea354490-c1e9-4cb2-a05e-2691aa628f04\") " pod="openshift-route-controller-manager/route-controller-manager-6dbb47955d-p9csw"
Jan 20 09:10:56 crc kubenswrapper[5115]: I0120 09:10:56.119167 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea354490-c1e9-4cb2-a05e-2691aa628f04-config\") pod \"route-controller-manager-6dbb47955d-p9csw\" (UID: \"ea354490-c1e9-4cb2-a05e-2691aa628f04\") " pod="openshift-route-controller-manager/route-controller-manager-6dbb47955d-p9csw"
Jan 20 09:10:56 crc kubenswrapper[5115]: I0120 09:10:56.119244 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ea354490-c1e9-4cb2-a05e-2691aa628f04-serving-cert\") pod \"route-controller-manager-6dbb47955d-p9csw\" (UID: \"ea354490-c1e9-4cb2-a05e-2691aa628f04\") " pod="openshift-route-controller-manager/route-controller-manager-6dbb47955d-p9csw"
Jan 20 09:10:56 crc kubenswrapper[5115]: I0120 09:10:56.119267 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ea354490-c1e9-4cb2-a05e-2691aa628f04-tmp\") pod \"route-controller-manager-6dbb47955d-p9csw\" (UID: \"ea354490-c1e9-4cb2-a05e-2691aa628f04\") " pod="openshift-route-controller-manager/route-controller-manager-6dbb47955d-p9csw"
Jan 20 09:10:56 crc kubenswrapper[5115]: I0120 09:10:56.120434 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ea354490-c1e9-4cb2-a05e-2691aa628f04-tmp\") pod \"route-controller-manager-6dbb47955d-p9csw\" (UID: \"ea354490-c1e9-4cb2-a05e-2691aa628f04\") " pod="openshift-route-controller-manager/route-controller-manager-6dbb47955d-p9csw"
Jan 20 09:10:56 crc kubenswrapper[5115]: I0120 09:10:56.120780 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ea354490-c1e9-4cb2-a05e-2691aa628f04-client-ca\") pod \"route-controller-manager-6dbb47955d-p9csw\" (UID: \"ea354490-c1e9-4cb2-a05e-2691aa628f04\") " pod="openshift-route-controller-manager/route-controller-manager-6dbb47955d-p9csw"
Jan 20 09:10:56 crc kubenswrapper[5115]: I0120 09:10:56.121566 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea354490-c1e9-4cb2-a05e-2691aa628f04-config\") pod \"route-controller-manager-6dbb47955d-p9csw\" (UID: \"ea354490-c1e9-4cb2-a05e-2691aa628f04\") " pod="openshift-route-controller-manager/route-controller-manager-6dbb47955d-p9csw"
Jan 20 09:10:56 crc kubenswrapper[5115]: I0120 09:10:56.132098 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ea354490-c1e9-4cb2-a05e-2691aa628f04-serving-cert\") pod \"route-controller-manager-6dbb47955d-p9csw\" (UID: \"ea354490-c1e9-4cb2-a05e-2691aa628f04\") " pod="openshift-route-controller-manager/route-controller-manager-6dbb47955d-p9csw"
Jan 20 09:10:56 crc kubenswrapper[5115]: I0120 09:10:56.138546 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvb75\" (UniqueName: \"kubernetes.io/projected/ea354490-c1e9-4cb2-a05e-2691aa628f04-kube-api-access-lvb75\") pod \"route-controller-manager-6dbb47955d-p9csw\" (UID: \"ea354490-c1e9-4cb2-a05e-2691aa628f04\") " pod="openshift-route-controller-manager/route-controller-manager-6dbb47955d-p9csw"
Jan 20 09:10:56 crc kubenswrapper[5115]: I0120 09:10:56.167884 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6dbb47955d-p9csw"
Jan 20 09:10:56 crc kubenswrapper[5115]: I0120 09:10:56.216321 5115 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64b4fd558d-xn8z9"]
Jan 20 09:10:56 crc kubenswrapper[5115]: I0120 09:10:56.225768 5115 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64b4fd558d-xn8z9"]
Jan 20 09:10:56 crc kubenswrapper[5115]: I0120 09:10:56.369879 5115 ???:1] "http: TLS handshake error from 192.168.126.11:37614: no serving certificate available for the kubelet"
Jan 20 09:10:56 crc kubenswrapper[5115]: I0120 09:10:56.641810 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6dbb47955d-p9csw"]
Jan 20 09:10:56 crc kubenswrapper[5115]: I0120 09:10:56.668928 5115 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2dlnj"
Jan 20 09:10:56 crc kubenswrapper[5115]: I0120 09:10:56.668982 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-2dlnj"
Jan 20 09:10:56 crc kubenswrapper[5115]: I0120 09:10:56.868136 5115 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2dlnj"
Jan 20 09:10:56 crc kubenswrapper[5115]: I0120 09:10:56.893622 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6dbb47955d-p9csw" event={"ID":"ea354490-c1e9-4cb2-a05e-2691aa628f04","Type":"ContainerStarted","Data":"ecc488089e2907ad65741a46b809cf94a5a4a9b7392b79f53726c2b0b4d5c94f"}
Jan 20 09:10:56 crc kubenswrapper[5115]: I0120 09:10:56.911628 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-mrnvw"
Jan 20 09:10:56 crc kubenswrapper[5115]: I0120 09:10:56.911702 5115 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mrnvw"
Jan 20 09:10:56 crc kubenswrapper[5115]: I0120 09:10:56.961098 5115 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mrnvw"
Jan 20 09:10:57 crc kubenswrapper[5115]: I0120 09:10:57.123491 5115 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-cn6h9"
Jan 20 09:10:57 crc kubenswrapper[5115]: I0120 09:10:57.123566 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-cn6h9"
Jan 20 09:10:57 crc kubenswrapper[5115]: I0120 09:10:57.161677 5115 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-cn6h9"
Jan 20 09:10:57 crc kubenswrapper[5115]: I0120 09:10:57.247010 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"]
Jan 20 09:10:57 crc kubenswrapper[5115]: I0120 09:10:57.397668 5115 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-8469db6cb8-pclzc" podUID="88667356-ca96-429b-a986-2018168d5da2" containerName="controller-manager" containerID="cri-o://787d6296c837165f0031e2f3b6f84cf69106700382a0334b57d327ab1bd28e64" gracePeriod=30
Jan 20 09:10:57 crc kubenswrapper[5115]: I0120 09:10:57.674778 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mrnvw"
Jan 20 09:10:57 crc kubenswrapper[5115]: I0120 09:10:57.674825 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"]
Jan 20 09:10:57 crc kubenswrapper[5115]: I0120 09:10:57.674843 5115 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ln8lc"
Jan 20 09:10:57 crc kubenswrapper[5115]: I0120 09:10:57.674886 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2dlnj"
Jan 20 09:10:57 crc kubenswrapper[5115]: I0120 09:10:57.675029 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-cn6h9"
Jan 20 09:10:57 crc kubenswrapper[5115]: I0120 09:10:57.675246 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-ln8lc"
Jan 20 09:10:57 crc kubenswrapper[5115]: I0120 09:10:57.675357 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc"
Jan 20 09:10:57 crc kubenswrapper[5115]: I0120 09:10:57.677257 5115 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ln8lc"
Jan 20 09:10:57 crc kubenswrapper[5115]: I0120 09:10:57.679559 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\""
Jan 20 09:10:57 crc kubenswrapper[5115]: I0120 09:10:57.680339 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\""
Jan 20 09:10:57 crc kubenswrapper[5115]: I0120 09:10:57.843842 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5e702626-2df6-4412-a9e4-9b6046e5d143-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"5e702626-2df6-4412-a9e4-9b6046e5d143\") " pod="openshift-kube-apiserver/revision-pruner-12-crc"
Jan 20 09:10:57 crc kubenswrapper[5115]: I0120 09:10:57.844416 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5e702626-2df6-4412-a9e4-9b6046e5d143-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"5e702626-2df6-4412-a9e4-9b6046e5d143\") " pod="openshift-kube-apiserver/revision-pruner-12-crc"
Jan 20 09:10:57 crc kubenswrapper[5115]: I0120 09:10:57.900474 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6dbb47955d-p9csw" event={"ID":"ea354490-c1e9-4cb2-a05e-2691aa628f04","Type":"ContainerStarted","Data":"30daf23eb5d860a7b3832fd3f7b5708676ed9e22c115a461341e4ca19ed8c2be"}
Jan 20 09:10:57 crc kubenswrapper[5115]: I0120 09:10:57.902294 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-6dbb47955d-p9csw"
Jan 20 09:10:57 crc kubenswrapper[5115]: I0120 09:10:57.904478 5115 generic.go:358] "Generic (PLEG): container finished" podID="88667356-ca96-429b-a986-2018168d5da2" containerID="787d6296c837165f0031e2f3b6f84cf69106700382a0334b57d327ab1bd28e64" exitCode=0
Jan 20 09:10:57 crc kubenswrapper[5115]: I0120 09:10:57.905526 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8469db6cb8-pclzc" event={"ID":"88667356-ca96-429b-a986-2018168d5da2","Type":"ContainerDied","Data":"787d6296c837165f0031e2f3b6f84cf69106700382a0334b57d327ab1bd28e64"}
Jan 20 09:10:57 crc kubenswrapper[5115]: I0120 09:10:57.921792 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6dbb47955d-p9csw" podStartSLOduration=2.921766562 podStartE2EDuration="2.921766562s" podCreationTimestamp="2026-01-20 09:10:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:10:57.91979574 +0000 UTC m=+168.088574270" watchObservedRunningTime="2026-01-20 09:10:57.921766562 +0000 UTC m=+168.090545102"
Jan 20 09:10:57 crc kubenswrapper[5115]: I0120 09:10:57.945612 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5e702626-2df6-4412-a9e4-9b6046e5d143-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"5e702626-2df6-4412-a9e4-9b6046e5d143\") " pod="openshift-kube-apiserver/revision-pruner-12-crc"
Jan 20 09:10:57 crc kubenswrapper[5115]: I0120 09:10:57.945723 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5e702626-2df6-4412-a9e4-9b6046e5d143-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"5e702626-2df6-4412-a9e4-9b6046e5d143\") " pod="openshift-kube-apiserver/revision-pruner-12-crc"
Jan 20 09:10:57 crc kubenswrapper[5115]: I0120 09:10:57.945817 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5e702626-2df6-4412-a9e4-9b6046e5d143-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"5e702626-2df6-4412-a9e4-9b6046e5d143\") " pod="openshift-kube-apiserver/revision-pruner-12-crc"
Jan 20 09:10:57 crc kubenswrapper[5115]: I0120 09:10:57.970038 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5e702626-2df6-4412-a9e4-9b6046e5d143-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"5e702626-2df6-4412-a9e4-9b6046e5d143\") " pod="openshift-kube-apiserver/revision-pruner-12-crc"
Jan 20 09:10:57 crc kubenswrapper[5115]: I0120 09:10:57.975767 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ln8lc"
Jan 20 09:10:57 crc kubenswrapper[5115]: I0120 09:10:57.993329 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc"
Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.226074 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6822615-2e54-40b4-a17f-9d5fb26e31db" path="/var/lib/kubelet/pods/f6822615-2e54-40b4-a17f-9d5fb26e31db/volumes"
Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.308238 5115 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cn6h9"]
Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.418352 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"]
Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.537593 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6dbb47955d-p9csw"
Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.680122 5115 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8469db6cb8-pclzc"
Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.720612 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-856c8c4494-gzm5q"]
Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.721644 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="88667356-ca96-429b-a986-2018168d5da2" containerName="controller-manager"
Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.721660 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="88667356-ca96-429b-a986-2018168d5da2" containerName="controller-manager"
Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.721855 5115 memory_manager.go:356] "RemoveStaleState removing state" podUID="88667356-ca96-429b-a986-2018168d5da2" containerName="controller-manager"
Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.768018 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-856c8c4494-gzm5q"
Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.780177 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-856c8c4494-gzm5q"]
Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.854018 5115 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5plkc"
Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.854073 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-5plkc"
Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.866423 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/88667356-ca96-429b-a986-2018168d5da2-proxy-ca-bundles\") pod \"88667356-ca96-429b-a986-2018168d5da2\" (UID: \"88667356-ca96-429b-a986-2018168d5da2\") "
Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.866571 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l4kgn\" (UniqueName: \"kubernetes.io/projected/88667356-ca96-429b-a986-2018168d5da2-kube-api-access-l4kgn\") pod \"88667356-ca96-429b-a986-2018168d5da2\" (UID: \"88667356-ca96-429b-a986-2018168d5da2\") "
Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.866647 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88667356-ca96-429b-a986-2018168d5da2-serving-cert\") pod \"88667356-ca96-429b-a986-2018168d5da2\" (UID: \"88667356-ca96-429b-a986-2018168d5da2\") "
Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.866752 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/88667356-ca96-429b-a986-2018168d5da2-tmp\") pod \"88667356-ca96-429b-a986-2018168d5da2\" (UID: \"88667356-ca96-429b-a986-2018168d5da2\") "
Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.866822 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88667356-ca96-429b-a986-2018168d5da2-config\") pod \"88667356-ca96-429b-a986-2018168d5da2\" (UID: \"88667356-ca96-429b-a986-2018168d5da2\") "
Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.866949 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/88667356-ca96-429b-a986-2018168d5da2-client-ca\") pod \"88667356-ca96-429b-a986-2018168d5da2\" (UID: \"88667356-ca96-429b-a986-2018168d5da2\") "
Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.867109 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c3904fe4-fb4d-4794-8d28-a76e420c437f-serving-cert\") pod \"controller-manager-856c8c4494-gzm5q\" (UID: \"c3904fe4-fb4d-4794-8d28-a76e420c437f\") " pod="openshift-controller-manager/controller-manager-856c8c4494-gzm5q"
Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.867143 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g99kg\" (UniqueName: \"kubernetes.io/projected/c3904fe4-fb4d-4794-8d28-a76e420c437f-kube-api-access-g99kg\") pod \"controller-manager-856c8c4494-gzm5q\" (UID: \"c3904fe4-fb4d-4794-8d28-a76e420c437f\") " pod="openshift-controller-manager/controller-manager-856c8c4494-gzm5q"
Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.867181 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c3904fe4-fb4d-4794-8d28-a76e420c437f-proxy-ca-bundles\") pod \"controller-manager-856c8c4494-gzm5q\" (UID: \"c3904fe4-fb4d-4794-8d28-a76e420c437f\") " pod="openshift-controller-manager/controller-manager-856c8c4494-gzm5q"
Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.867211 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3904fe4-fb4d-4794-8d28-a76e420c437f-config\") pod \"controller-manager-856c8c4494-gzm5q\" (UID: \"c3904fe4-fb4d-4794-8d28-a76e420c437f\") " pod="openshift-controller-manager/controller-manager-856c8c4494-gzm5q"
Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.867284 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c3904fe4-fb4d-4794-8d28-a76e420c437f-tmp\") pod \"controller-manager-856c8c4494-gzm5q\" (UID: \"c3904fe4-fb4d-4794-8d28-a76e420c437f\") " pod="openshift-controller-manager/controller-manager-856c8c4494-gzm5q"
Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.867354 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c3904fe4-fb4d-4794-8d28-a76e420c437f-client-ca\") pod \"controller-manager-856c8c4494-gzm5q\" (UID: \"c3904fe4-fb4d-4794-8d28-a76e420c437f\") " pod="openshift-controller-manager/controller-manager-856c8c4494-gzm5q"
Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.867910 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88667356-ca96-429b-a986-2018168d5da2-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "88667356-ca96-429b-a986-2018168d5da2" (UID: "88667356-ca96-429b-a986-2018168d5da2"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.868200 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88667356-ca96-429b-a986-2018168d5da2-tmp" (OuterVolumeSpecName: "tmp") pod "88667356-ca96-429b-a986-2018168d5da2" (UID: "88667356-ca96-429b-a986-2018168d5da2"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.868646 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88667356-ca96-429b-a986-2018168d5da2-config" (OuterVolumeSpecName: "config") pod "88667356-ca96-429b-a986-2018168d5da2" (UID: "88667356-ca96-429b-a986-2018168d5da2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.868771 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88667356-ca96-429b-a986-2018168d5da2-client-ca" (OuterVolumeSpecName: "client-ca") pod "88667356-ca96-429b-a986-2018168d5da2" (UID: "88667356-ca96-429b-a986-2018168d5da2"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.872926 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88667356-ca96-429b-a986-2018168d5da2-kube-api-access-l4kgn" (OuterVolumeSpecName: "kube-api-access-l4kgn") pod "88667356-ca96-429b-a986-2018168d5da2" (UID: "88667356-ca96-429b-a986-2018168d5da2"). InnerVolumeSpecName "kube-api-access-l4kgn". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.872939 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88667356-ca96-429b-a986-2018168d5da2-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "88667356-ca96-429b-a986-2018168d5da2" (UID: "88667356-ca96-429b-a986-2018168d5da2"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.901698 5115 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5plkc"
Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.916529 5115 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8469db6cb8-pclzc"
Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.917142 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8469db6cb8-pclzc" event={"ID":"88667356-ca96-429b-a986-2018168d5da2","Type":"ContainerDied","Data":"bc82955899180d05360cc4862d2c67462685d6b730e0a5fb73668f78e7e7679f"}
Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.917193 5115 scope.go:117] "RemoveContainer" containerID="787d6296c837165f0031e2f3b6f84cf69106700382a0334b57d327ab1bd28e64"
Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.919461 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"5e702626-2df6-4412-a9e4-9b6046e5d143","Type":"ContainerStarted","Data":"3a45326bcfd846639f58cac83f8e8699a7606ca325de927d7dc1eacf7e6baf6a"}
Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.956670 5115 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-8469db6cb8-pclzc"]
Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.960970 5115 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-8469db6cb8-pclzc"]
Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.969033 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c3904fe4-fb4d-4794-8d28-a76e420c437f-client-ca\") pod \"controller-manager-856c8c4494-gzm5q\" (UID: \"c3904fe4-fb4d-4794-8d28-a76e420c437f\") " pod="openshift-controller-manager/controller-manager-856c8c4494-gzm5q"
Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.969120 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c3904fe4-fb4d-4794-8d28-a76e420c437f-serving-cert\") pod \"controller-manager-856c8c4494-gzm5q\" (UID: \"c3904fe4-fb4d-4794-8d28-a76e420c437f\") " pod="openshift-controller-manager/controller-manager-856c8c4494-gzm5q"
Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.969152 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-g99kg\" (UniqueName: \"kubernetes.io/projected/c3904fe4-fb4d-4794-8d28-a76e420c437f-kube-api-access-g99kg\") pod \"controller-manager-856c8c4494-gzm5q\" (UID: \"c3904fe4-fb4d-4794-8d28-a76e420c437f\") " pod="openshift-controller-manager/controller-manager-856c8c4494-gzm5q"
Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.969186 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c3904fe4-fb4d-4794-8d28-a76e420c437f-proxy-ca-bundles\") pod \"controller-manager-856c8c4494-gzm5q\" (UID: \"c3904fe4-fb4d-4794-8d28-a76e420c437f\") " pod="openshift-controller-manager/controller-manager-856c8c4494-gzm5q"
Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.969215 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3904fe4-fb4d-4794-8d28-a76e420c437f-config\") pod \"controller-manager-856c8c4494-gzm5q\" (UID: \"c3904fe4-fb4d-4794-8d28-a76e420c437f\") " pod="openshift-controller-manager/controller-manager-856c8c4494-gzm5q"
Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.969259 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c3904fe4-fb4d-4794-8d28-a76e420c437f-tmp\") pod \"controller-manager-856c8c4494-gzm5q\" (UID: \"c3904fe4-fb4d-4794-8d28-a76e420c437f\") " pod="openshift-controller-manager/controller-manager-856c8c4494-gzm5q"
Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.969326 5115 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/88667356-ca96-429b-a986-2018168d5da2-tmp\") on node \"crc\" DevicePath \"\""
Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.969343 5115 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88667356-ca96-429b-a986-2018168d5da2-config\") on node \"crc\" DevicePath \"\""
Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.969359 5115 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/88667356-ca96-429b-a986-2018168d5da2-client-ca\") on node \"crc\" DevicePath \"\""
Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.969375 5115 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/88667356-ca96-429b-a986-2018168d5da2-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.969394 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l4kgn\" (UniqueName: \"kubernetes.io/projected/88667356-ca96-429b-a986-2018168d5da2-kube-api-access-l4kgn\") on node \"crc\" DevicePath \"\""
Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.969409 5115 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88667356-ca96-429b-a986-2018168d5da2-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.971653 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c3904fe4-fb4d-4794-8d28-a76e420c437f-client-ca\") pod \"controller-manager-856c8c4494-gzm5q\" (UID: \"c3904fe4-fb4d-4794-8d28-a76e420c437f\") " pod="openshift-controller-manager/controller-manager-856c8c4494-gzm5q"
Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.972337 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c3904fe4-fb4d-4794-8d28-a76e420c437f-proxy-ca-bundles\") pod \"controller-manager-856c8c4494-gzm5q\" (UID: \"c3904fe4-fb4d-4794-8d28-a76e420c437f\") " pod="openshift-controller-manager/controller-manager-856c8c4494-gzm5q"
Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.974030 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3904fe4-fb4d-4794-8d28-a76e420c437f-config\") pod \"controller-manager-856c8c4494-gzm5q\" (UID: \"c3904fe4-fb4d-4794-8d28-a76e420c437f\") " pod="openshift-controller-manager/controller-manager-856c8c4494-gzm5q"
Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.974434 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c3904fe4-fb4d-4794-8d28-a76e420c437f-tmp\") pod \"controller-manager-856c8c4494-gzm5q\" (UID: \"c3904fe4-fb4d-4794-8d28-a76e420c437f\") " pod="openshift-controller-manager/controller-manager-856c8c4494-gzm5q"
Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.975416 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready"
pod="openshift-marketplace/redhat-marketplace-5plkc" Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.977959 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c3904fe4-fb4d-4794-8d28-a76e420c437f-serving-cert\") pod \"controller-manager-856c8c4494-gzm5q\" (UID: \"c3904fe4-fb4d-4794-8d28-a76e420c437f\") " pod="openshift-controller-manager/controller-manager-856c8c4494-gzm5q" Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.988005 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-g99kg\" (UniqueName: \"kubernetes.io/projected/c3904fe4-fb4d-4794-8d28-a76e420c437f-kube-api-access-g99kg\") pod \"controller-manager-856c8c4494-gzm5q\" (UID: \"c3904fe4-fb4d-4794-8d28-a76e420c437f\") " pod="openshift-controller-manager/controller-manager-856c8c4494-gzm5q" Jan 20 09:10:59 crc kubenswrapper[5115]: I0120 09:10:59.099161 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-856c8c4494-gzm5q" Jan 20 09:10:59 crc kubenswrapper[5115]: I0120 09:10:59.138351 5115 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-b5s99" Jan 20 09:10:59 crc kubenswrapper[5115]: I0120 09:10:59.138404 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-b5s99" Jan 20 09:10:59 crc kubenswrapper[5115]: I0120 09:10:59.190246 5115 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-b5s99" Jan 20 09:10:59 crc kubenswrapper[5115]: I0120 09:10:59.529686 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-856c8c4494-gzm5q"] Jan 20 09:10:59 crc kubenswrapper[5115]: W0120 09:10:59.537112 5115 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc3904fe4_fb4d_4794_8d28_a76e420c437f.slice/crio-c797a19ebd1b6ba916e33ac8707e5acaa7f5238c9ba9c86fc08a09140acea056 WatchSource:0}: Error finding container c797a19ebd1b6ba916e33ac8707e5acaa7f5238c9ba9c86fc08a09140acea056: Status 404 returned error can't find the container with id c797a19ebd1b6ba916e33ac8707e5acaa7f5238c9ba9c86fc08a09140acea056 Jan 20 09:10:59 crc kubenswrapper[5115]: I0120 09:10:59.831660 5115 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-45pv6" Jan 20 09:10:59 crc kubenswrapper[5115]: I0120 09:10:59.832240 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-45pv6" Jan 20 09:10:59 crc kubenswrapper[5115]: I0120 09:10:59.887693 5115 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-45pv6" Jan 20 09:10:59 crc 
kubenswrapper[5115]: I0120 09:10:59.936672 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-856c8c4494-gzm5q" event={"ID":"c3904fe4-fb4d-4794-8d28-a76e420c437f","Type":"ContainerStarted","Data":"42119b70d5dddd02d9f195c2192729a0ee39c8ce459b1cf112b279e95c1aab2a"} Jan 20 09:10:59 crc kubenswrapper[5115]: I0120 09:10:59.936733 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-856c8c4494-gzm5q" event={"ID":"c3904fe4-fb4d-4794-8d28-a76e420c437f","Type":"ContainerStarted","Data":"c797a19ebd1b6ba916e33ac8707e5acaa7f5238c9ba9c86fc08a09140acea056"} Jan 20 09:10:59 crc kubenswrapper[5115]: I0120 09:10:59.937105 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-856c8c4494-gzm5q" Jan 20 09:10:59 crc kubenswrapper[5115]: I0120 09:10:59.938356 5115 generic.go:358] "Generic (PLEG): container finished" podID="5e702626-2df6-4412-a9e4-9b6046e5d143" containerID="e6103a0933a658cea6904c3a48521826045b7fe22397fb3db0c7bb8cc7460e00" exitCode=0 Jan 20 09:10:59 crc kubenswrapper[5115]: I0120 09:10:59.938954 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"5e702626-2df6-4412-a9e4-9b6046e5d143","Type":"ContainerDied","Data":"e6103a0933a658cea6904c3a48521826045b7fe22397fb3db0c7bb8cc7460e00"} Jan 20 09:10:59 crc kubenswrapper[5115]: I0120 09:10:59.939097 5115 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-cn6h9" podUID="c182ef91-1ca8-4330-bd75-8120c4401b54" containerName="registry-server" containerID="cri-o://310176ff3faa068eec35b262e875ff2ef66e7e5cb3cf7c06006974317bf85b74" gracePeriod=2 Jan 20 09:10:59 crc kubenswrapper[5115]: I0120 09:10:59.958593 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-controller-manager/controller-manager-856c8c4494-gzm5q" podStartSLOduration=4.958560834 podStartE2EDuration="4.958560834s" podCreationTimestamp="2026-01-20 09:10:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:10:59.957363202 +0000 UTC m=+170.126141752" watchObservedRunningTime="2026-01-20 09:10:59.958560834 +0000 UTC m=+170.127339384" Jan 20 09:10:59 crc kubenswrapper[5115]: I0120 09:10:59.988116 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-b5s99" Jan 20 09:11:00 crc kubenswrapper[5115]: I0120 09:11:00.002741 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-45pv6" Jan 20 09:11:00 crc kubenswrapper[5115]: I0120 09:11:00.109258 5115 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ln8lc"] Jan 20 09:11:00 crc kubenswrapper[5115]: I0120 09:11:00.187175 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-856c8c4494-gzm5q" Jan 20 09:11:00 crc kubenswrapper[5115]: I0120 09:11:00.230183 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88667356-ca96-429b-a986-2018168d5da2" path="/var/lib/kubelet/pods/88667356-ca96-429b-a986-2018168d5da2/volumes" Jan 20 09:11:00 crc kubenswrapper[5115]: I0120 09:11:00.258260 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-vv5qk" Jan 20 09:11:00 crc kubenswrapper[5115]: I0120 09:11:00.259089 5115 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vv5qk" Jan 20 09:11:00 crc kubenswrapper[5115]: I0120 09:11:00.314254 5115 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-operators-vv5qk" Jan 20 09:11:00 crc kubenswrapper[5115]: I0120 09:11:00.476703 5115 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cn6h9" Jan 20 09:11:00 crc kubenswrapper[5115]: I0120 09:11:00.611085 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c182ef91-1ca8-4330-bd75-8120c4401b54-catalog-content\") pod \"c182ef91-1ca8-4330-bd75-8120c4401b54\" (UID: \"c182ef91-1ca8-4330-bd75-8120c4401b54\") " Jan 20 09:11:00 crc kubenswrapper[5115]: I0120 09:11:00.611186 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c182ef91-1ca8-4330-bd75-8120c4401b54-utilities\") pod \"c182ef91-1ca8-4330-bd75-8120c4401b54\" (UID: \"c182ef91-1ca8-4330-bd75-8120c4401b54\") " Jan 20 09:11:00 crc kubenswrapper[5115]: I0120 09:11:00.611272 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fp6vx\" (UniqueName: \"kubernetes.io/projected/c182ef91-1ca8-4330-bd75-8120c4401b54-kube-api-access-fp6vx\") pod \"c182ef91-1ca8-4330-bd75-8120c4401b54\" (UID: \"c182ef91-1ca8-4330-bd75-8120c4401b54\") " Jan 20 09:11:00 crc kubenswrapper[5115]: I0120 09:11:00.613167 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c182ef91-1ca8-4330-bd75-8120c4401b54-utilities" (OuterVolumeSpecName: "utilities") pod "c182ef91-1ca8-4330-bd75-8120c4401b54" (UID: "c182ef91-1ca8-4330-bd75-8120c4401b54"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 20 09:11:00 crc kubenswrapper[5115]: I0120 09:11:00.620371 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c182ef91-1ca8-4330-bd75-8120c4401b54-kube-api-access-fp6vx" (OuterVolumeSpecName: "kube-api-access-fp6vx") pod "c182ef91-1ca8-4330-bd75-8120c4401b54" (UID: "c182ef91-1ca8-4330-bd75-8120c4401b54"). InnerVolumeSpecName "kube-api-access-fp6vx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:11:00 crc kubenswrapper[5115]: I0120 09:11:00.660103 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c182ef91-1ca8-4330-bd75-8120c4401b54-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c182ef91-1ca8-4330-bd75-8120c4401b54" (UID: "c182ef91-1ca8-4330-bd75-8120c4401b54"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 20 09:11:00 crc kubenswrapper[5115]: I0120 09:11:00.713519 5115 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c182ef91-1ca8-4330-bd75-8120c4401b54-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 09:11:00 crc kubenswrapper[5115]: I0120 09:11:00.713553 5115 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c182ef91-1ca8-4330-bd75-8120c4401b54-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 09:11:00 crc kubenswrapper[5115]: I0120 09:11:00.713563 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fp6vx\" (UniqueName: \"kubernetes.io/projected/c182ef91-1ca8-4330-bd75-8120c4401b54-kube-api-access-fp6vx\") on node \"crc\" DevicePath \"\"" Jan 20 09:11:00 crc kubenswrapper[5115]: I0120 09:11:00.948359 5115 generic.go:358] "Generic (PLEG): container finished" podID="c182ef91-1ca8-4330-bd75-8120c4401b54" 
containerID="310176ff3faa068eec35b262e875ff2ef66e7e5cb3cf7c06006974317bf85b74" exitCode=0 Jan 20 09:11:00 crc kubenswrapper[5115]: I0120 09:11:00.948503 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cn6h9" event={"ID":"c182ef91-1ca8-4330-bd75-8120c4401b54","Type":"ContainerDied","Data":"310176ff3faa068eec35b262e875ff2ef66e7e5cb3cf7c06006974317bf85b74"} Jan 20 09:11:00 crc kubenswrapper[5115]: I0120 09:11:00.948577 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cn6h9" event={"ID":"c182ef91-1ca8-4330-bd75-8120c4401b54","Type":"ContainerDied","Data":"91ffd30d0b07fe8b71ba5e2b62abd0321e935c136baf579cb7b5b85fbfc8da21"} Jan 20 09:11:00 crc kubenswrapper[5115]: I0120 09:11:00.948607 5115 scope.go:117] "RemoveContainer" containerID="310176ff3faa068eec35b262e875ff2ef66e7e5cb3cf7c06006974317bf85b74" Jan 20 09:11:00 crc kubenswrapper[5115]: I0120 09:11:00.948803 5115 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cn6h9" Jan 20 09:11:01 crc kubenswrapper[5115]: I0120 09:11:01.006285 5115 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ln8lc" podUID="098c57a3-a775-41d0-b528-6833df51eb70" containerName="registry-server" containerID="cri-o://262846a0b39ea0c22c3e2461fb7a80f6f691c5c001332b515947c0f30875a14d" gracePeriod=2 Jan 20 09:11:01 crc kubenswrapper[5115]: I0120 09:11:01.069833 5115 scope.go:117] "RemoveContainer" containerID="288865d63bc61bc4176419a2d913e42143434094aaa92d600adfeadef0831036" Jan 20 09:11:01 crc kubenswrapper[5115]: I0120 09:11:01.070188 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vv5qk" Jan 20 09:11:01 crc kubenswrapper[5115]: I0120 09:11:01.097925 5115 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cn6h9"] Jan 20 09:11:01 crc kubenswrapper[5115]: I0120 09:11:01.109471 5115 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-cn6h9"] Jan 20 09:11:01 crc kubenswrapper[5115]: I0120 09:11:01.126579 5115 scope.go:117] "RemoveContainer" containerID="cb35ec44b7685ef3772567937b1f41239bca24193257b445ea714ac16c6bf55a" Jan 20 09:11:01 crc kubenswrapper[5115]: I0120 09:11:01.157466 5115 scope.go:117] "RemoveContainer" containerID="310176ff3faa068eec35b262e875ff2ef66e7e5cb3cf7c06006974317bf85b74" Jan 20 09:11:01 crc kubenswrapper[5115]: E0120 09:11:01.159147 5115 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"310176ff3faa068eec35b262e875ff2ef66e7e5cb3cf7c06006974317bf85b74\": container with ID starting with 310176ff3faa068eec35b262e875ff2ef66e7e5cb3cf7c06006974317bf85b74 not found: ID does not exist" containerID="310176ff3faa068eec35b262e875ff2ef66e7e5cb3cf7c06006974317bf85b74" Jan 20 
09:11:01 crc kubenswrapper[5115]: I0120 09:11:01.159179 5115 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"310176ff3faa068eec35b262e875ff2ef66e7e5cb3cf7c06006974317bf85b74"} err="failed to get container status \"310176ff3faa068eec35b262e875ff2ef66e7e5cb3cf7c06006974317bf85b74\": rpc error: code = NotFound desc = could not find container \"310176ff3faa068eec35b262e875ff2ef66e7e5cb3cf7c06006974317bf85b74\": container with ID starting with 310176ff3faa068eec35b262e875ff2ef66e7e5cb3cf7c06006974317bf85b74 not found: ID does not exist" Jan 20 09:11:01 crc kubenswrapper[5115]: I0120 09:11:01.159208 5115 scope.go:117] "RemoveContainer" containerID="288865d63bc61bc4176419a2d913e42143434094aaa92d600adfeadef0831036" Jan 20 09:11:01 crc kubenswrapper[5115]: E0120 09:11:01.160875 5115 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"288865d63bc61bc4176419a2d913e42143434094aaa92d600adfeadef0831036\": container with ID starting with 288865d63bc61bc4176419a2d913e42143434094aaa92d600adfeadef0831036 not found: ID does not exist" containerID="288865d63bc61bc4176419a2d913e42143434094aaa92d600adfeadef0831036" Jan 20 09:11:01 crc kubenswrapper[5115]: I0120 09:11:01.160929 5115 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"288865d63bc61bc4176419a2d913e42143434094aaa92d600adfeadef0831036"} err="failed to get container status \"288865d63bc61bc4176419a2d913e42143434094aaa92d600adfeadef0831036\": rpc error: code = NotFound desc = could not find container \"288865d63bc61bc4176419a2d913e42143434094aaa92d600adfeadef0831036\": container with ID starting with 288865d63bc61bc4176419a2d913e42143434094aaa92d600adfeadef0831036 not found: ID does not exist" Jan 20 09:11:01 crc kubenswrapper[5115]: I0120 09:11:01.160944 5115 scope.go:117] "RemoveContainer" 
containerID="cb35ec44b7685ef3772567937b1f41239bca24193257b445ea714ac16c6bf55a" Jan 20 09:11:01 crc kubenswrapper[5115]: E0120 09:11:01.161940 5115 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb35ec44b7685ef3772567937b1f41239bca24193257b445ea714ac16c6bf55a\": container with ID starting with cb35ec44b7685ef3772567937b1f41239bca24193257b445ea714ac16c6bf55a not found: ID does not exist" containerID="cb35ec44b7685ef3772567937b1f41239bca24193257b445ea714ac16c6bf55a" Jan 20 09:11:01 crc kubenswrapper[5115]: I0120 09:11:01.161958 5115 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb35ec44b7685ef3772567937b1f41239bca24193257b445ea714ac16c6bf55a"} err="failed to get container status \"cb35ec44b7685ef3772567937b1f41239bca24193257b445ea714ac16c6bf55a\": rpc error: code = NotFound desc = could not find container \"cb35ec44b7685ef3772567937b1f41239bca24193257b445ea714ac16c6bf55a\": container with ID starting with cb35ec44b7685ef3772567937b1f41239bca24193257b445ea714ac16c6bf55a not found: ID does not exist" Jan 20 09:11:01 crc kubenswrapper[5115]: I0120 09:11:01.237665 5115 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 20 09:11:01 crc kubenswrapper[5115]: I0120 09:11:01.321144 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5e702626-2df6-4412-a9e4-9b6046e5d143-kubelet-dir\") pod \"5e702626-2df6-4412-a9e4-9b6046e5d143\" (UID: \"5e702626-2df6-4412-a9e4-9b6046e5d143\") " Jan 20 09:11:01 crc kubenswrapper[5115]: I0120 09:11:01.321249 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5e702626-2df6-4412-a9e4-9b6046e5d143-kube-api-access\") pod \"5e702626-2df6-4412-a9e4-9b6046e5d143\" (UID: \"5e702626-2df6-4412-a9e4-9b6046e5d143\") " Jan 20 09:11:01 crc kubenswrapper[5115]: I0120 09:11:01.321263 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5e702626-2df6-4412-a9e4-9b6046e5d143-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "5e702626-2df6-4412-a9e4-9b6046e5d143" (UID: "5e702626-2df6-4412-a9e4-9b6046e5d143"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 09:11:01 crc kubenswrapper[5115]: I0120 09:11:01.321550 5115 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5e702626-2df6-4412-a9e4-9b6046e5d143-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 20 09:11:01 crc kubenswrapper[5115]: I0120 09:11:01.329764 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e702626-2df6-4412-a9e4-9b6046e5d143-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "5e702626-2df6-4412-a9e4-9b6046e5d143" (UID: "5e702626-2df6-4412-a9e4-9b6046e5d143"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:11:01 crc kubenswrapper[5115]: I0120 09:11:01.423127 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5e702626-2df6-4412-a9e4-9b6046e5d143-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 20 09:11:01 crc kubenswrapper[5115]: I0120 09:11:01.961683 5115 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 20 09:11:01 crc kubenswrapper[5115]: I0120 09:11:01.961714 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"5e702626-2df6-4412-a9e4-9b6046e5d143","Type":"ContainerDied","Data":"3a45326bcfd846639f58cac83f8e8699a7606ca325de927d7dc1eacf7e6baf6a"} Jan 20 09:11:01 crc kubenswrapper[5115]: I0120 09:11:01.965321 5115 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a45326bcfd846639f58cac83f8e8699a7606ca325de927d7dc1eacf7e6baf6a" Jan 20 09:11:02 crc kubenswrapper[5115]: I0120 09:11:02.229856 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c182ef91-1ca8-4330-bd75-8120c4401b54" path="/var/lib/kubelet/pods/c182ef91-1ca8-4330-bd75-8120c4401b54/volumes" Jan 20 09:11:02 crc kubenswrapper[5115]: I0120 09:11:02.515798 5115 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-b5s99"] Jan 20 09:11:02 crc kubenswrapper[5115]: I0120 09:11:02.516307 5115 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-b5s99" podUID="8b758f72-1c19-45ea-8f26-580952f254a6" containerName="registry-server" containerID="cri-o://fc2a291b34f7498fa1d59e04fd9f020e1e86521c4cb4fc751ea58888835018e9" gracePeriod=2 Jan 20 09:11:02 crc kubenswrapper[5115]: I0120 09:11:02.974440 5115 generic.go:358] "Generic (PLEG): container finished" 
podID="8b758f72-1c19-45ea-8f26-580952f254a6" containerID="fc2a291b34f7498fa1d59e04fd9f020e1e86521c4cb4fc751ea58888835018e9" exitCode=0 Jan 20 09:11:02 crc kubenswrapper[5115]: I0120 09:11:02.974551 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b5s99" event={"ID":"8b758f72-1c19-45ea-8f26-580952f254a6","Type":"ContainerDied","Data":"fc2a291b34f7498fa1d59e04fd9f020e1e86521c4cb4fc751ea58888835018e9"} Jan 20 09:11:02 crc kubenswrapper[5115]: I0120 09:11:02.979631 5115 generic.go:358] "Generic (PLEG): container finished" podID="098c57a3-a775-41d0-b528-6833df51eb70" containerID="262846a0b39ea0c22c3e2461fb7a80f6f691c5c001332b515947c0f30875a14d" exitCode=0 Jan 20 09:11:02 crc kubenswrapper[5115]: I0120 09:11:02.979689 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ln8lc" event={"ID":"098c57a3-a775-41d0-b528-6833df51eb70","Type":"ContainerDied","Data":"262846a0b39ea0c22c3e2461fb7a80f6f691c5c001332b515947c0f30875a14d"} Jan 20 09:11:03 crc kubenswrapper[5115]: I0120 09:11:03.047576 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Jan 20 09:11:03 crc kubenswrapper[5115]: I0120 09:11:03.048415 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c182ef91-1ca8-4330-bd75-8120c4401b54" containerName="extract-content" Jan 20 09:11:03 crc kubenswrapper[5115]: I0120 09:11:03.048439 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="c182ef91-1ca8-4330-bd75-8120c4401b54" containerName="extract-content" Jan 20 09:11:03 crc kubenswrapper[5115]: I0120 09:11:03.048464 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c182ef91-1ca8-4330-bd75-8120c4401b54" containerName="registry-server" Jan 20 09:11:03 crc kubenswrapper[5115]: I0120 09:11:03.048473 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="c182ef91-1ca8-4330-bd75-8120c4401b54" 
containerName="registry-server" Jan 20 09:11:03 crc kubenswrapper[5115]: I0120 09:11:03.048502 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5e702626-2df6-4412-a9e4-9b6046e5d143" containerName="pruner" Jan 20 09:11:03 crc kubenswrapper[5115]: I0120 09:11:03.048510 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e702626-2df6-4412-a9e4-9b6046e5d143" containerName="pruner" Jan 20 09:11:03 crc kubenswrapper[5115]: I0120 09:11:03.048526 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c182ef91-1ca8-4330-bd75-8120c4401b54" containerName="extract-utilities" Jan 20 09:11:03 crc kubenswrapper[5115]: I0120 09:11:03.048534 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="c182ef91-1ca8-4330-bd75-8120c4401b54" containerName="extract-utilities" Jan 20 09:11:03 crc kubenswrapper[5115]: I0120 09:11:03.048644 5115 memory_manager.go:356] "RemoveStaleState removing state" podUID="c182ef91-1ca8-4330-bd75-8120c4401b54" containerName="registry-server" Jan 20 09:11:03 crc kubenswrapper[5115]: I0120 09:11:03.048666 5115 memory_manager.go:356] "RemoveStaleState removing state" podUID="5e702626-2df6-4412-a9e4-9b6046e5d143" containerName="pruner" Jan 20 09:11:03 crc kubenswrapper[5115]: I0120 09:11:03.055337 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Jan 20 09:11:03 crc kubenswrapper[5115]: I0120 09:11:03.057840 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\"" Jan 20 09:11:03 crc kubenswrapper[5115]: I0120 09:11:03.058313 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\"" Jan 20 09:11:03 crc kubenswrapper[5115]: I0120 09:11:03.069189 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Jan 20 09:11:03 crc kubenswrapper[5115]: I0120 09:11:03.110300 5115 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vv5qk"] Jan 20 09:11:03 crc kubenswrapper[5115]: I0120 09:11:03.152678 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/128ab750-3574-4f36-a27e-5bddc737a52d-kube-api-access\") pod \"installer-12-crc\" (UID: \"128ab750-3574-4f36-a27e-5bddc737a52d\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 20 09:11:03 crc kubenswrapper[5115]: I0120 09:11:03.152727 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/128ab750-3574-4f36-a27e-5bddc737a52d-kubelet-dir\") pod \"installer-12-crc\" (UID: \"128ab750-3574-4f36-a27e-5bddc737a52d\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 20 09:11:03 crc kubenswrapper[5115]: I0120 09:11:03.152915 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/128ab750-3574-4f36-a27e-5bddc737a52d-var-lock\") pod \"installer-12-crc\" (UID: \"128ab750-3574-4f36-a27e-5bddc737a52d\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 20 
09:11:03 crc kubenswrapper[5115]: I0120 09:11:03.161989 5115 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ln8lc"
Jan 20 09:11:03 crc kubenswrapper[5115]: I0120 09:11:03.254280 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ft22z\" (UniqueName: \"kubernetes.io/projected/098c57a3-a775-41d0-b528-6833df51eb70-kube-api-access-ft22z\") pod \"098c57a3-a775-41d0-b528-6833df51eb70\" (UID: \"098c57a3-a775-41d0-b528-6833df51eb70\") "
Jan 20 09:11:03 crc kubenswrapper[5115]: I0120 09:11:03.254374 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/098c57a3-a775-41d0-b528-6833df51eb70-catalog-content\") pod \"098c57a3-a775-41d0-b528-6833df51eb70\" (UID: \"098c57a3-a775-41d0-b528-6833df51eb70\") "
Jan 20 09:11:03 crc kubenswrapper[5115]: I0120 09:11:03.254410 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/098c57a3-a775-41d0-b528-6833df51eb70-utilities\") pod \"098c57a3-a775-41d0-b528-6833df51eb70\" (UID: \"098c57a3-a775-41d0-b528-6833df51eb70\") "
Jan 20 09:11:03 crc kubenswrapper[5115]: I0120 09:11:03.254720 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/128ab750-3574-4f36-a27e-5bddc737a52d-kube-api-access\") pod \"installer-12-crc\" (UID: \"128ab750-3574-4f36-a27e-5bddc737a52d\") " pod="openshift-kube-apiserver/installer-12-crc"
Jan 20 09:11:03 crc kubenswrapper[5115]: I0120 09:11:03.254749 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/128ab750-3574-4f36-a27e-5bddc737a52d-kubelet-dir\") pod \"installer-12-crc\" (UID: \"128ab750-3574-4f36-a27e-5bddc737a52d\") " pod="openshift-kube-apiserver/installer-12-crc"
Jan 20 09:11:03 crc kubenswrapper[5115]: I0120 09:11:03.254846 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/128ab750-3574-4f36-a27e-5bddc737a52d-var-lock\") pod \"installer-12-crc\" (UID: \"128ab750-3574-4f36-a27e-5bddc737a52d\") " pod="openshift-kube-apiserver/installer-12-crc"
Jan 20 09:11:03 crc kubenswrapper[5115]: I0120 09:11:03.254959 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/128ab750-3574-4f36-a27e-5bddc737a52d-var-lock\") pod \"installer-12-crc\" (UID: \"128ab750-3574-4f36-a27e-5bddc737a52d\") " pod="openshift-kube-apiserver/installer-12-crc"
Jan 20 09:11:03 crc kubenswrapper[5115]: I0120 09:11:03.255018 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/128ab750-3574-4f36-a27e-5bddc737a52d-kubelet-dir\") pod \"installer-12-crc\" (UID: \"128ab750-3574-4f36-a27e-5bddc737a52d\") " pod="openshift-kube-apiserver/installer-12-crc"
Jan 20 09:11:03 crc kubenswrapper[5115]: I0120 09:11:03.256026 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/098c57a3-a775-41d0-b528-6833df51eb70-utilities" (OuterVolumeSpecName: "utilities") pod "098c57a3-a775-41d0-b528-6833df51eb70" (UID: "098c57a3-a775-41d0-b528-6833df51eb70"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 20 09:11:03 crc kubenswrapper[5115]: I0120 09:11:03.267308 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/098c57a3-a775-41d0-b528-6833df51eb70-kube-api-access-ft22z" (OuterVolumeSpecName: "kube-api-access-ft22z") pod "098c57a3-a775-41d0-b528-6833df51eb70" (UID: "098c57a3-a775-41d0-b528-6833df51eb70"). InnerVolumeSpecName "kube-api-access-ft22z". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 20 09:11:03 crc kubenswrapper[5115]: I0120 09:11:03.287925 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/098c57a3-a775-41d0-b528-6833df51eb70-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "098c57a3-a775-41d0-b528-6833df51eb70" (UID: "098c57a3-a775-41d0-b528-6833df51eb70"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 20 09:11:03 crc kubenswrapper[5115]: I0120 09:11:03.289196 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/128ab750-3574-4f36-a27e-5bddc737a52d-kube-api-access\") pod \"installer-12-crc\" (UID: \"128ab750-3574-4f36-a27e-5bddc737a52d\") " pod="openshift-kube-apiserver/installer-12-crc"
Jan 20 09:11:03 crc kubenswrapper[5115]: I0120 09:11:03.356352 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ft22z\" (UniqueName: \"kubernetes.io/projected/098c57a3-a775-41d0-b528-6833df51eb70-kube-api-access-ft22z\") on node \"crc\" DevicePath \"\""
Jan 20 09:11:03 crc kubenswrapper[5115]: I0120 09:11:03.356912 5115 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/098c57a3-a775-41d0-b528-6833df51eb70-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 20 09:11:03 crc kubenswrapper[5115]: I0120 09:11:03.357033 5115 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/098c57a3-a775-41d0-b528-6833df51eb70-utilities\") on node \"crc\" DevicePath \"\""
Jan 20 09:11:03 crc kubenswrapper[5115]: I0120 09:11:03.381969 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc"
Jan 20 09:11:03 crc kubenswrapper[5115]: I0120 09:11:03.432076 5115 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b5s99"
Jan 20 09:11:03 crc kubenswrapper[5115]: I0120 09:11:03.459541 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pdgqv\" (UniqueName: \"kubernetes.io/projected/8b758f72-1c19-45ea-8f26-580952f254a6-kube-api-access-pdgqv\") pod \"8b758f72-1c19-45ea-8f26-580952f254a6\" (UID: \"8b758f72-1c19-45ea-8f26-580952f254a6\") "
Jan 20 09:11:03 crc kubenswrapper[5115]: I0120 09:11:03.459622 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b758f72-1c19-45ea-8f26-580952f254a6-utilities\") pod \"8b758f72-1c19-45ea-8f26-580952f254a6\" (UID: \"8b758f72-1c19-45ea-8f26-580952f254a6\") "
Jan 20 09:11:03 crc kubenswrapper[5115]: I0120 09:11:03.459718 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b758f72-1c19-45ea-8f26-580952f254a6-catalog-content\") pod \"8b758f72-1c19-45ea-8f26-580952f254a6\" (UID: \"8b758f72-1c19-45ea-8f26-580952f254a6\") "
Jan 20 09:11:03 crc kubenswrapper[5115]: I0120 09:11:03.460858 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b758f72-1c19-45ea-8f26-580952f254a6-utilities" (OuterVolumeSpecName: "utilities") pod "8b758f72-1c19-45ea-8f26-580952f254a6" (UID: "8b758f72-1c19-45ea-8f26-580952f254a6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 20 09:11:03 crc kubenswrapper[5115]: I0120 09:11:03.470829 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b758f72-1c19-45ea-8f26-580952f254a6-kube-api-access-pdgqv" (OuterVolumeSpecName: "kube-api-access-pdgqv") pod "8b758f72-1c19-45ea-8f26-580952f254a6" (UID: "8b758f72-1c19-45ea-8f26-580952f254a6"). InnerVolumeSpecName "kube-api-access-pdgqv". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 20 09:11:03 crc kubenswrapper[5115]: I0120 09:11:03.473492 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b758f72-1c19-45ea-8f26-580952f254a6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8b758f72-1c19-45ea-8f26-580952f254a6" (UID: "8b758f72-1c19-45ea-8f26-580952f254a6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 20 09:11:03 crc kubenswrapper[5115]: I0120 09:11:03.561362 5115 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b758f72-1c19-45ea-8f26-580952f254a6-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 20 09:11:03 crc kubenswrapper[5115]: I0120 09:11:03.561417 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pdgqv\" (UniqueName: \"kubernetes.io/projected/8b758f72-1c19-45ea-8f26-580952f254a6-kube-api-access-pdgqv\") on node \"crc\" DevicePath \"\""
Jan 20 09:11:03 crc kubenswrapper[5115]: I0120 09:11:03.561430 5115 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b758f72-1c19-45ea-8f26-580952f254a6-utilities\") on node \"crc\" DevicePath \"\""
Jan 20 09:11:03 crc kubenswrapper[5115]: I0120 09:11:03.593484 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"]
Jan 20 09:11:03 crc kubenswrapper[5115]: I0120 09:11:03.988119 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ln8lc" event={"ID":"098c57a3-a775-41d0-b528-6833df51eb70","Type":"ContainerDied","Data":"092aa312ded9179826cf1c7718d79766d577bbc74bfdc3260b75b3acb73e6544"}
Jan 20 09:11:03 crc kubenswrapper[5115]: I0120 09:11:03.988203 5115 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ln8lc"
Jan 20 09:11:03 crc kubenswrapper[5115]: I0120 09:11:03.988644 5115 scope.go:117] "RemoveContainer" containerID="262846a0b39ea0c22c3e2461fb7a80f6f691c5c001332b515947c0f30875a14d"
Jan 20 09:11:03 crc kubenswrapper[5115]: I0120 09:11:03.992226 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b5s99" event={"ID":"8b758f72-1c19-45ea-8f26-580952f254a6","Type":"ContainerDied","Data":"d7901e6ddc7891030f2ad2227e71e157692b55779b1855cb63d09ff8803bd38a"}
Jan 20 09:11:03 crc kubenswrapper[5115]: I0120 09:11:03.992259 5115 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b5s99"
Jan 20 09:11:03 crc kubenswrapper[5115]: I0120 09:11:03.994028 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"128ab750-3574-4f36-a27e-5bddc737a52d","Type":"ContainerStarted","Data":"2f73af6d69f6c232d9d9d0a495fca6672d15d9b3c8a84a1c612e0ef514970d06"}
Jan 20 09:11:04 crc kubenswrapper[5115]: I0120 09:11:04.043462 5115 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-b5s99"]
Jan 20 09:11:04 crc kubenswrapper[5115]: I0120 09:11:04.045286 5115 scope.go:117] "RemoveContainer" containerID="ee94f68db59e4e1ddf21ca6ca9dd7fd93edccbc4ea24208558bcdd84d58df32e"
Jan 20 09:11:04 crc kubenswrapper[5115]: I0120 09:11:04.047062 5115 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-b5s99"]
Jan 20 09:11:04 crc kubenswrapper[5115]: I0120 09:11:04.087346 5115 scope.go:117] "RemoveContainer" containerID="f88e943d46c00e03b49000272db95a963fb31d5df3dc7dea80bbd32f957cb111"
Jan 20 09:11:04 crc kubenswrapper[5115]: I0120 09:11:04.110701 5115 scope.go:117] "RemoveContainer" containerID="fc2a291b34f7498fa1d59e04fd9f020e1e86521c4cb4fc751ea58888835018e9"
Jan 20 09:11:04 crc kubenswrapper[5115]: I0120 09:11:04.129355 5115 scope.go:117] "RemoveContainer" containerID="935cf80d7a9856e0a66b21d9b86b0fed97665532ad80b040c550b50951c14c19"
Jan 20 09:11:04 crc kubenswrapper[5115]: I0120 09:11:04.150480 5115 scope.go:117] "RemoveContainer" containerID="bc05a2904480cda612c996cbe03bed8e6889a08a812820a545bd5567edf848da"
Jan 20 09:11:04 crc kubenswrapper[5115]: I0120 09:11:04.476811 5115 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vv5qk" podUID="b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3" containerName="registry-server" containerID="cri-o://16d160e92d5f6eb7e86089b3e9ed2b1d0541d36b9b9f8bf35054aecefda063d4" gracePeriod=2
Jan 20 09:11:04 crc kubenswrapper[5115]: I0120 09:11:04.490090 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b758f72-1c19-45ea-8f26-580952f254a6" path="/var/lib/kubelet/pods/8b758f72-1c19-45ea-8f26-580952f254a6/volumes"
Jan 20 09:11:04 crc kubenswrapper[5115]: I0120 09:11:04.490956 5115 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ln8lc"]
Jan 20 09:11:04 crc kubenswrapper[5115]: I0120 09:11:04.490993 5115 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ln8lc"]
Jan 20 09:11:05 crc kubenswrapper[5115]: I0120 09:11:05.017559 5115 generic.go:358] "Generic (PLEG): container finished" podID="b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3" containerID="16d160e92d5f6eb7e86089b3e9ed2b1d0541d36b9b9f8bf35054aecefda063d4" exitCode=0
Jan 20 09:11:05 crc kubenswrapper[5115]: I0120 09:11:05.018237 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vv5qk" event={"ID":"b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3","Type":"ContainerDied","Data":"16d160e92d5f6eb7e86089b3e9ed2b1d0541d36b9b9f8bf35054aecefda063d4"}
Jan 20 09:11:06 crc kubenswrapper[5115]: I0120 09:11:06.227759 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="098c57a3-a775-41d0-b528-6833df51eb70" path="/var/lib/kubelet/pods/098c57a3-a775-41d0-b528-6833df51eb70/volumes"
Jan 20 09:11:06 crc kubenswrapper[5115]: I0120 09:11:06.402996 5115 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vv5qk"
Jan 20 09:11:06 crc kubenswrapper[5115]: I0120 09:11:06.515363 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4shq\" (UniqueName: \"kubernetes.io/projected/b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3-kube-api-access-w4shq\") pod \"b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3\" (UID: \"b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3\") "
Jan 20 09:11:06 crc kubenswrapper[5115]: I0120 09:11:06.515460 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3-catalog-content\") pod \"b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3\" (UID: \"b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3\") "
Jan 20 09:11:06 crc kubenswrapper[5115]: I0120 09:11:06.515501 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3-utilities\") pod \"b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3\" (UID: \"b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3\") "
Jan 20 09:11:06 crc kubenswrapper[5115]: I0120 09:11:06.517101 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3-utilities" (OuterVolumeSpecName: "utilities") pod "b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3" (UID: "b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 20 09:11:06 crc kubenswrapper[5115]: I0120 09:11:06.523935 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3-kube-api-access-w4shq" (OuterVolumeSpecName: "kube-api-access-w4shq") pod "b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3" (UID: "b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3"). InnerVolumeSpecName "kube-api-access-w4shq". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 20 09:11:06 crc kubenswrapper[5115]: I0120 09:11:06.617921 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w4shq\" (UniqueName: \"kubernetes.io/projected/b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3-kube-api-access-w4shq\") on node \"crc\" DevicePath \"\""
Jan 20 09:11:06 crc kubenswrapper[5115]: I0120 09:11:06.617965 5115 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3-utilities\") on node \"crc\" DevicePath \"\""
Jan 20 09:11:06 crc kubenswrapper[5115]: I0120 09:11:06.738539 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3" (UID: "b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 20 09:11:06 crc kubenswrapper[5115]: I0120 09:11:06.821412 5115 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 20 09:11:07 crc kubenswrapper[5115]: I0120 09:11:07.034617 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"128ab750-3574-4f36-a27e-5bddc737a52d","Type":"ContainerStarted","Data":"92b9831f290b04d0013bc0318c36c8ef1081a308ee1f6759b62245920ad2c43e"}
Jan 20 09:11:07 crc kubenswrapper[5115]: I0120 09:11:07.037909 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vv5qk" event={"ID":"b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3","Type":"ContainerDied","Data":"523e078e78e6cfb054a40a6916767e994deee00e08213d3cb61f49d65fa63001"}
Jan 20 09:11:07 crc kubenswrapper[5115]: I0120 09:11:07.037960 5115 scope.go:117] "RemoveContainer" containerID="16d160e92d5f6eb7e86089b3e9ed2b1d0541d36b9b9f8bf35054aecefda063d4"
Jan 20 09:11:07 crc kubenswrapper[5115]: I0120 09:11:07.038042 5115 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vv5qk"
Jan 20 09:11:07 crc kubenswrapper[5115]: I0120 09:11:07.050882 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-12-crc" podStartSLOduration=4.050861249 podStartE2EDuration="4.050861249s" podCreationTimestamp="2026-01-20 09:11:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:11:07.049263696 +0000 UTC m=+177.218042276" watchObservedRunningTime="2026-01-20 09:11:07.050861249 +0000 UTC m=+177.219639779"
Jan 20 09:11:07 crc kubenswrapper[5115]: I0120 09:11:07.055601 5115 scope.go:117] "RemoveContainer" containerID="099a58929bcd11d7806830d94c60b1c1e735c7d4ed3c769e2373744a991c063d"
Jan 20 09:11:07 crc kubenswrapper[5115]: I0120 09:11:07.078871 5115 scope.go:117] "RemoveContainer" containerID="5c908a7c31ca720aadea8c8fd54b15fdf8ae8be43be8f76f2eb7b5413aeb74c6"
Jan 20 09:11:07 crc kubenswrapper[5115]: I0120 09:11:07.105452 5115 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vv5qk"]
Jan 20 09:11:07 crc kubenswrapper[5115]: I0120 09:11:07.109602 5115 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vv5qk"]
Jan 20 09:11:08 crc kubenswrapper[5115]: I0120 09:11:08.226761 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3" path="/var/lib/kubelet/pods/b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3/volumes"
Jan 20 09:11:15 crc kubenswrapper[5115]: I0120 09:11:15.332514 5115 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-856c8c4494-gzm5q"]
Jan 20 09:11:15 crc kubenswrapper[5115]: I0120 09:11:15.334873 5115 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-856c8c4494-gzm5q" podUID="c3904fe4-fb4d-4794-8d28-a76e420c437f" containerName="controller-manager" containerID="cri-o://42119b70d5dddd02d9f195c2192729a0ee39c8ce459b1cf112b279e95c1aab2a" gracePeriod=30
Jan 20 09:11:15 crc kubenswrapper[5115]: I0120 09:11:15.356597 5115 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6dbb47955d-p9csw"]
Jan 20 09:11:15 crc kubenswrapper[5115]: I0120 09:11:15.357722 5115 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6dbb47955d-p9csw" podUID="ea354490-c1e9-4cb2-a05e-2691aa628f04" containerName="route-controller-manager" containerID="cri-o://30daf23eb5d860a7b3832fd3f7b5708676ed9e22c115a461341e4ca19ed8c2be" gracePeriod=30
Jan 20 09:11:15 crc kubenswrapper[5115]: I0120 09:11:15.856046 5115 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6dbb47955d-p9csw"
Jan 20 09:11:15 crc kubenswrapper[5115]: I0120 09:11:15.886103 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-668cf4f594-bg2ms"]
Jan 20 09:11:15 crc kubenswrapper[5115]: I0120 09:11:15.886779 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="098c57a3-a775-41d0-b528-6833df51eb70" containerName="extract-utilities"
Jan 20 09:11:15 crc kubenswrapper[5115]: I0120 09:11:15.886802 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="098c57a3-a775-41d0-b528-6833df51eb70" containerName="extract-utilities"
Jan 20 09:11:15 crc kubenswrapper[5115]: I0120 09:11:15.886819 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8b758f72-1c19-45ea-8f26-580952f254a6" containerName="extract-utilities"
Jan 20 09:11:15 crc kubenswrapper[5115]: I0120 09:11:15.886826 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b758f72-1c19-45ea-8f26-580952f254a6" containerName="extract-utilities"
Jan 20 09:11:15 crc kubenswrapper[5115]: I0120 09:11:15.886834 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3" containerName="extract-content"
Jan 20 09:11:15 crc kubenswrapper[5115]: I0120 09:11:15.886841 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3" containerName="extract-content"
Jan 20 09:11:15 crc kubenswrapper[5115]: I0120 09:11:15.886852 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="098c57a3-a775-41d0-b528-6833df51eb70" containerName="registry-server"
Jan 20 09:11:15 crc kubenswrapper[5115]: I0120 09:11:15.886857 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="098c57a3-a775-41d0-b528-6833df51eb70" containerName="registry-server"
Jan 20 09:11:15 crc kubenswrapper[5115]: I0120 09:11:15.886865 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ea354490-c1e9-4cb2-a05e-2691aa628f04" containerName="route-controller-manager"
Jan 20 09:11:15 crc kubenswrapper[5115]: I0120 09:11:15.886870 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea354490-c1e9-4cb2-a05e-2691aa628f04" containerName="route-controller-manager"
Jan 20 09:11:15 crc kubenswrapper[5115]: I0120 09:11:15.886880 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="098c57a3-a775-41d0-b528-6833df51eb70" containerName="extract-content"
Jan 20 09:11:15 crc kubenswrapper[5115]: I0120 09:11:15.886885 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="098c57a3-a775-41d0-b528-6833df51eb70" containerName="extract-content"
Jan 20 09:11:15 crc kubenswrapper[5115]: I0120 09:11:15.886906 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8b758f72-1c19-45ea-8f26-580952f254a6" containerName="registry-server"
Jan 20 09:11:15 crc kubenswrapper[5115]: I0120 09:11:15.886911 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b758f72-1c19-45ea-8f26-580952f254a6" containerName="registry-server"
Jan 20 09:11:15 crc kubenswrapper[5115]: I0120 09:11:15.886924 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8b758f72-1c19-45ea-8f26-580952f254a6" containerName="extract-content"
Jan 20 09:11:15 crc kubenswrapper[5115]: I0120 09:11:15.886934 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b758f72-1c19-45ea-8f26-580952f254a6" containerName="extract-content"
Jan 20 09:11:15 crc kubenswrapper[5115]: I0120 09:11:15.886943 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3" containerName="registry-server"
Jan 20 09:11:15 crc kubenswrapper[5115]: I0120 09:11:15.886950 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3" containerName="registry-server"
Jan 20 09:11:15 crc kubenswrapper[5115]: I0120 09:11:15.886964 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3" containerName="extract-utilities"
Jan 20 09:11:15 crc kubenswrapper[5115]: I0120 09:11:15.886969 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3" containerName="extract-utilities"
Jan 20 09:11:15 crc kubenswrapper[5115]: I0120 09:11:15.887075 5115 memory_manager.go:356] "RemoveStaleState removing state" podUID="098c57a3-a775-41d0-b528-6833df51eb70" containerName="registry-server"
Jan 20 09:11:15 crc kubenswrapper[5115]: I0120 09:11:15.887089 5115 memory_manager.go:356] "RemoveStaleState removing state" podUID="8b758f72-1c19-45ea-8f26-580952f254a6" containerName="registry-server"
Jan 20 09:11:15 crc kubenswrapper[5115]: I0120 09:11:15.887104 5115 memory_manager.go:356] "RemoveStaleState removing state" podUID="b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3" containerName="registry-server"
Jan 20 09:11:15 crc kubenswrapper[5115]: I0120 09:11:15.887113 5115 memory_manager.go:356] "RemoveStaleState removing state" podUID="ea354490-c1e9-4cb2-a05e-2691aa628f04" containerName="route-controller-manager"
Jan 20 09:11:15 crc kubenswrapper[5115]: I0120 09:11:15.898802 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-668cf4f594-bg2ms"]
Jan 20 09:11:15 crc kubenswrapper[5115]: I0120 09:11:15.899271 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-668cf4f594-bg2ms"
Jan 20 09:11:15 crc kubenswrapper[5115]: I0120 09:11:15.904245 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ea354490-c1e9-4cb2-a05e-2691aa628f04-tmp\") pod \"ea354490-c1e9-4cb2-a05e-2691aa628f04\" (UID: \"ea354490-c1e9-4cb2-a05e-2691aa628f04\") "
Jan 20 09:11:15 crc kubenswrapper[5115]: I0120 09:11:15.904837 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ea354490-c1e9-4cb2-a05e-2691aa628f04-client-ca\") pod \"ea354490-c1e9-4cb2-a05e-2691aa628f04\" (UID: \"ea354490-c1e9-4cb2-a05e-2691aa628f04\") "
Jan 20 09:11:15 crc kubenswrapper[5115]: I0120 09:11:15.904935 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ea354490-c1e9-4cb2-a05e-2691aa628f04-serving-cert\") pod \"ea354490-c1e9-4cb2-a05e-2691aa628f04\" (UID: \"ea354490-c1e9-4cb2-a05e-2691aa628f04\") "
Jan 20 09:11:15 crc kubenswrapper[5115]: I0120 09:11:15.904644 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea354490-c1e9-4cb2-a05e-2691aa628f04-tmp" (OuterVolumeSpecName: "tmp") pod "ea354490-c1e9-4cb2-a05e-2691aa628f04" (UID: "ea354490-c1e9-4cb2-a05e-2691aa628f04"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 20 09:11:15 crc kubenswrapper[5115]: I0120 09:11:15.905019 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lvb75\" (UniqueName: \"kubernetes.io/projected/ea354490-c1e9-4cb2-a05e-2691aa628f04-kube-api-access-lvb75\") pod \"ea354490-c1e9-4cb2-a05e-2691aa628f04\" (UID: \"ea354490-c1e9-4cb2-a05e-2691aa628f04\") "
Jan 20 09:11:15 crc kubenswrapper[5115]: I0120 09:11:15.905224 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea354490-c1e9-4cb2-a05e-2691aa628f04-config\") pod \"ea354490-c1e9-4cb2-a05e-2691aa628f04\" (UID: \"ea354490-c1e9-4cb2-a05e-2691aa628f04\") "
Jan 20 09:11:15 crc kubenswrapper[5115]: I0120 09:11:15.905957 5115 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ea354490-c1e9-4cb2-a05e-2691aa628f04-tmp\") on node \"crc\" DevicePath \"\""
Jan 20 09:11:15 crc kubenswrapper[5115]: I0120 09:11:15.906013 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea354490-c1e9-4cb2-a05e-2691aa628f04-client-ca" (OuterVolumeSpecName: "client-ca") pod "ea354490-c1e9-4cb2-a05e-2691aa628f04" (UID: "ea354490-c1e9-4cb2-a05e-2691aa628f04"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 20 09:11:15 crc kubenswrapper[5115]: I0120 09:11:15.906048 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea354490-c1e9-4cb2-a05e-2691aa628f04-config" (OuterVolumeSpecName: "config") pod "ea354490-c1e9-4cb2-a05e-2691aa628f04" (UID: "ea354490-c1e9-4cb2-a05e-2691aa628f04"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 20 09:11:15 crc kubenswrapper[5115]: I0120 09:11:15.921597 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea354490-c1e9-4cb2-a05e-2691aa628f04-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ea354490-c1e9-4cb2-a05e-2691aa628f04" (UID: "ea354490-c1e9-4cb2-a05e-2691aa628f04"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 20 09:11:15 crc kubenswrapper[5115]: I0120 09:11:15.928107 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea354490-c1e9-4cb2-a05e-2691aa628f04-kube-api-access-lvb75" (OuterVolumeSpecName: "kube-api-access-lvb75") pod "ea354490-c1e9-4cb2-a05e-2691aa628f04" (UID: "ea354490-c1e9-4cb2-a05e-2691aa628f04"). InnerVolumeSpecName "kube-api-access-lvb75". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.007327 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0445ff5a-7f56-4085-98a2-35f8418fc9b5-serving-cert\") pod \"route-controller-manager-668cf4f594-bg2ms\" (UID: \"0445ff5a-7f56-4085-98a2-35f8418fc9b5\") " pod="openshift-route-controller-manager/route-controller-manager-668cf4f594-bg2ms"
Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.007387 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0445ff5a-7f56-4085-98a2-35f8418fc9b5-client-ca\") pod \"route-controller-manager-668cf4f594-bg2ms\" (UID: \"0445ff5a-7f56-4085-98a2-35f8418fc9b5\") " pod="openshift-route-controller-manager/route-controller-manager-668cf4f594-bg2ms"
Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.007440 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0445ff5a-7f56-4085-98a2-35f8418fc9b5-config\") pod \"route-controller-manager-668cf4f594-bg2ms\" (UID: \"0445ff5a-7f56-4085-98a2-35f8418fc9b5\") " pod="openshift-route-controller-manager/route-controller-manager-668cf4f594-bg2ms"
Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.007485 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0445ff5a-7f56-4085-98a2-35f8418fc9b5-tmp\") pod \"route-controller-manager-668cf4f594-bg2ms\" (UID: \"0445ff5a-7f56-4085-98a2-35f8418fc9b5\") " pod="openshift-route-controller-manager/route-controller-manager-668cf4f594-bg2ms"
Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.007513 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwjbl\" (UniqueName: \"kubernetes.io/projected/0445ff5a-7f56-4085-98a2-35f8418fc9b5-kube-api-access-gwjbl\") pod \"route-controller-manager-668cf4f594-bg2ms\" (UID: \"0445ff5a-7f56-4085-98a2-35f8418fc9b5\") " pod="openshift-route-controller-manager/route-controller-manager-668cf4f594-bg2ms"
Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.007570 5115 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ea354490-c1e9-4cb2-a05e-2691aa628f04-client-ca\") on node \"crc\" DevicePath \"\""
Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.007584 5115 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ea354490-c1e9-4cb2-a05e-2691aa628f04-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.007595 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lvb75\" (UniqueName: \"kubernetes.io/projected/ea354490-c1e9-4cb2-a05e-2691aa628f04-kube-api-access-lvb75\") on node \"crc\" DevicePath \"\""
Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.007606 5115 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea354490-c1e9-4cb2-a05e-2691aa628f04-config\") on node \"crc\" DevicePath \"\""
Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.047000 5115 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-856c8c4494-gzm5q"
Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.077330 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5fb6cd4bfd-x5c9k"]
Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.077968 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c3904fe4-fb4d-4794-8d28-a76e420c437f" containerName="controller-manager"
Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.077983 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3904fe4-fb4d-4794-8d28-a76e420c437f" containerName="controller-manager"
Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.078097 5115 memory_manager.go:356] "RemoveStaleState removing state" podUID="c3904fe4-fb4d-4794-8d28-a76e420c437f" containerName="controller-manager"
Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.085656 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5fb6cd4bfd-x5c9k"
Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.096354 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5fb6cd4bfd-x5c9k"]
Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.111302 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3904fe4-fb4d-4794-8d28-a76e420c437f-config\") pod \"c3904fe4-fb4d-4794-8d28-a76e420c437f\" (UID: \"c3904fe4-fb4d-4794-8d28-a76e420c437f\") "
Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.111389 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c3904fe4-fb4d-4794-8d28-a76e420c437f-client-ca\") pod \"c3904fe4-fb4d-4794-8d28-a76e420c437f\" (UID: \"c3904fe4-fb4d-4794-8d28-a76e420c437f\") "
Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.111432 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g99kg\" (UniqueName: \"kubernetes.io/projected/c3904fe4-fb4d-4794-8d28-a76e420c437f-kube-api-access-g99kg\") pod \"c3904fe4-fb4d-4794-8d28-a76e420c437f\" (UID: \"c3904fe4-fb4d-4794-8d28-a76e420c437f\") "
Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.111467 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c3904fe4-fb4d-4794-8d28-a76e420c437f-serving-cert\") pod \"c3904fe4-fb4d-4794-8d28-a76e420c437f\" (UID: \"c3904fe4-fb4d-4794-8d28-a76e420c437f\") "
Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.111579 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c3904fe4-fb4d-4794-8d28-a76e420c437f-tmp\") pod \"c3904fe4-fb4d-4794-8d28-a76e420c437f\" (UID: \"c3904fe4-fb4d-4794-8d28-a76e420c437f\") "
Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.111610 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c3904fe4-fb4d-4794-8d28-a76e420c437f-proxy-ca-bundles\") pod \"c3904fe4-fb4d-4794-8d28-a76e420c437f\" (UID: \"c3904fe4-fb4d-4794-8d28-a76e420c437f\") "
Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.111937 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0445ff5a-7f56-4085-98a2-35f8418fc9b5-serving-cert\") pod \"route-controller-manager-668cf4f594-bg2ms\" (UID: \"0445ff5a-7f56-4085-98a2-35f8418fc9b5\") " pod="openshift-route-controller-manager/route-controller-manager-668cf4f594-bg2ms"
Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.111973 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0445ff5a-7f56-4085-98a2-35f8418fc9b5-client-ca\") pod \"route-controller-manager-668cf4f594-bg2ms\" (UID: \"0445ff5a-7f56-4085-98a2-35f8418fc9b5\") " pod="openshift-route-controller-manager/route-controller-manager-668cf4f594-bg2ms"
Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.112038 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0445ff5a-7f56-4085-98a2-35f8418fc9b5-config\") pod \"route-controller-manager-668cf4f594-bg2ms\" (UID: \"0445ff5a-7f56-4085-98a2-35f8418fc9b5\") " pod="openshift-route-controller-manager/route-controller-manager-668cf4f594-bg2ms"
Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.112094 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0445ff5a-7f56-4085-98a2-35f8418fc9b5-tmp\") pod \"route-controller-manager-668cf4f594-bg2ms\" (UID: \"0445ff5a-7f56-4085-98a2-35f8418fc9b5\") " pod="openshift-route-controller-manager/route-controller-manager-668cf4f594-bg2ms"
Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.112124 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwjbl\" (UniqueName: \"kubernetes.io/projected/0445ff5a-7f56-4085-98a2-35f8418fc9b5-kube-api-access-gwjbl\") pod \"route-controller-manager-668cf4f594-bg2ms\" (UID: \"0445ff5a-7f56-4085-98a2-35f8418fc9b5\") " pod="openshift-route-controller-manager/route-controller-manager-668cf4f594-bg2ms"
Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.114331 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0445ff5a-7f56-4085-98a2-35f8418fc9b5-config\") pod \"route-controller-manager-668cf4f594-bg2ms\" (UID: \"0445ff5a-7f56-4085-98a2-35f8418fc9b5\") " pod="openshift-route-controller-manager/route-controller-manager-668cf4f594-bg2ms"
Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.116694 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3904fe4-fb4d-4794-8d28-a76e420c437f-kube-api-access-g99kg" (OuterVolumeSpecName: "kube-api-access-g99kg") pod "c3904fe4-fb4d-4794-8d28-a76e420c437f" (UID: "c3904fe4-fb4d-4794-8d28-a76e420c437f"). InnerVolumeSpecName "kube-api-access-g99kg". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.116925 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3904fe4-fb4d-4794-8d28-a76e420c437f-client-ca" (OuterVolumeSpecName: "client-ca") pod "c3904fe4-fb4d-4794-8d28-a76e420c437f" (UID: "c3904fe4-fb4d-4794-8d28-a76e420c437f"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.117373 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0445ff5a-7f56-4085-98a2-35f8418fc9b5-tmp\") pod \"route-controller-manager-668cf4f594-bg2ms\" (UID: \"0445ff5a-7f56-4085-98a2-35f8418fc9b5\") " pod="openshift-route-controller-manager/route-controller-manager-668cf4f594-bg2ms" Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.117372 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3904fe4-fb4d-4794-8d28-a76e420c437f-config" (OuterVolumeSpecName: "config") pod "c3904fe4-fb4d-4794-8d28-a76e420c437f" (UID: "c3904fe4-fb4d-4794-8d28-a76e420c437f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.117738 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3904fe4-fb4d-4794-8d28-a76e420c437f-tmp" (OuterVolumeSpecName: "tmp") pod "c3904fe4-fb4d-4794-8d28-a76e420c437f" (UID: "c3904fe4-fb4d-4794-8d28-a76e420c437f"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.118149 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3904fe4-fb4d-4794-8d28-a76e420c437f-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "c3904fe4-fb4d-4794-8d28-a76e420c437f" (UID: "c3904fe4-fb4d-4794-8d28-a76e420c437f"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.119343 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0445ff5a-7f56-4085-98a2-35f8418fc9b5-client-ca\") pod \"route-controller-manager-668cf4f594-bg2ms\" (UID: \"0445ff5a-7f56-4085-98a2-35f8418fc9b5\") " pod="openshift-route-controller-manager/route-controller-manager-668cf4f594-bg2ms" Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.119470 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0445ff5a-7f56-4085-98a2-35f8418fc9b5-serving-cert\") pod \"route-controller-manager-668cf4f594-bg2ms\" (UID: \"0445ff5a-7f56-4085-98a2-35f8418fc9b5\") " pod="openshift-route-controller-manager/route-controller-manager-668cf4f594-bg2ms" Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.123164 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3904fe4-fb4d-4794-8d28-a76e420c437f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c3904fe4-fb4d-4794-8d28-a76e420c437f" (UID: "c3904fe4-fb4d-4794-8d28-a76e420c437f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.128926 5115 generic.go:358] "Generic (PLEG): container finished" podID="ea354490-c1e9-4cb2-a05e-2691aa628f04" containerID="30daf23eb5d860a7b3832fd3f7b5708676ed9e22c115a461341e4ca19ed8c2be" exitCode=0 Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.128986 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6dbb47955d-p9csw" event={"ID":"ea354490-c1e9-4cb2-a05e-2691aa628f04","Type":"ContainerDied","Data":"30daf23eb5d860a7b3832fd3f7b5708676ed9e22c115a461341e4ca19ed8c2be"} Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.129032 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6dbb47955d-p9csw" event={"ID":"ea354490-c1e9-4cb2-a05e-2691aa628f04","Type":"ContainerDied","Data":"ecc488089e2907ad65741a46b809cf94a5a4a9b7392b79f53726c2b0b4d5c94f"} Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.129052 5115 scope.go:117] "RemoveContainer" containerID="30daf23eb5d860a7b3832fd3f7b5708676ed9e22c115a461341e4ca19ed8c2be" Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.129395 5115 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6dbb47955d-p9csw" Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.130630 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwjbl\" (UniqueName: \"kubernetes.io/projected/0445ff5a-7f56-4085-98a2-35f8418fc9b5-kube-api-access-gwjbl\") pod \"route-controller-manager-668cf4f594-bg2ms\" (UID: \"0445ff5a-7f56-4085-98a2-35f8418fc9b5\") " pod="openshift-route-controller-manager/route-controller-manager-668cf4f594-bg2ms" Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.132939 5115 generic.go:358] "Generic (PLEG): container finished" podID="c3904fe4-fb4d-4794-8d28-a76e420c437f" containerID="42119b70d5dddd02d9f195c2192729a0ee39c8ce459b1cf112b279e95c1aab2a" exitCode=0 Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.133223 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-856c8c4494-gzm5q" event={"ID":"c3904fe4-fb4d-4794-8d28-a76e420c437f","Type":"ContainerDied","Data":"42119b70d5dddd02d9f195c2192729a0ee39c8ce459b1cf112b279e95c1aab2a"} Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.133306 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-856c8c4494-gzm5q" event={"ID":"c3904fe4-fb4d-4794-8d28-a76e420c437f","Type":"ContainerDied","Data":"c797a19ebd1b6ba916e33ac8707e5acaa7f5238c9ba9c86fc08a09140acea056"} Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.133417 5115 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-856c8c4494-gzm5q" Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.148789 5115 scope.go:117] "RemoveContainer" containerID="30daf23eb5d860a7b3832fd3f7b5708676ed9e22c115a461341e4ca19ed8c2be" Jan 20 09:11:16 crc kubenswrapper[5115]: E0120 09:11:16.149292 5115 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30daf23eb5d860a7b3832fd3f7b5708676ed9e22c115a461341e4ca19ed8c2be\": container with ID starting with 30daf23eb5d860a7b3832fd3f7b5708676ed9e22c115a461341e4ca19ed8c2be not found: ID does not exist" containerID="30daf23eb5d860a7b3832fd3f7b5708676ed9e22c115a461341e4ca19ed8c2be" Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.149340 5115 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30daf23eb5d860a7b3832fd3f7b5708676ed9e22c115a461341e4ca19ed8c2be"} err="failed to get container status \"30daf23eb5d860a7b3832fd3f7b5708676ed9e22c115a461341e4ca19ed8c2be\": rpc error: code = NotFound desc = could not find container \"30daf23eb5d860a7b3832fd3f7b5708676ed9e22c115a461341e4ca19ed8c2be\": container with ID starting with 30daf23eb5d860a7b3832fd3f7b5708676ed9e22c115a461341e4ca19ed8c2be not found: ID does not exist" Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.149363 5115 scope.go:117] "RemoveContainer" containerID="42119b70d5dddd02d9f195c2192729a0ee39c8ce459b1cf112b279e95c1aab2a" Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.167925 5115 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6dbb47955d-p9csw"] Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.171118 5115 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6dbb47955d-p9csw"] Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.173098 5115 scope.go:117] 
"RemoveContainer" containerID="42119b70d5dddd02d9f195c2192729a0ee39c8ce459b1cf112b279e95c1aab2a" Jan 20 09:11:16 crc kubenswrapper[5115]: E0120 09:11:16.174440 5115 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42119b70d5dddd02d9f195c2192729a0ee39c8ce459b1cf112b279e95c1aab2a\": container with ID starting with 42119b70d5dddd02d9f195c2192729a0ee39c8ce459b1cf112b279e95c1aab2a not found: ID does not exist" containerID="42119b70d5dddd02d9f195c2192729a0ee39c8ce459b1cf112b279e95c1aab2a" Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.174485 5115 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42119b70d5dddd02d9f195c2192729a0ee39c8ce459b1cf112b279e95c1aab2a"} err="failed to get container status \"42119b70d5dddd02d9f195c2192729a0ee39c8ce459b1cf112b279e95c1aab2a\": rpc error: code = NotFound desc = could not find container \"42119b70d5dddd02d9f195c2192729a0ee39c8ce459b1cf112b279e95c1aab2a\": container with ID starting with 42119b70d5dddd02d9f195c2192729a0ee39c8ce459b1cf112b279e95c1aab2a not found: ID does not exist" Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.181504 5115 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-856c8c4494-gzm5q"] Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.184536 5115 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-856c8c4494-gzm5q"] Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.213220 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/941ddcdd-0183-45d6-929e-e4138126657d-tmp\") pod \"controller-manager-5fb6cd4bfd-x5c9k\" (UID: \"941ddcdd-0183-45d6-929e-e4138126657d\") " pod="openshift-controller-manager/controller-manager-5fb6cd4bfd-x5c9k" Jan 20 09:11:16 crc kubenswrapper[5115]: 
I0120 09:11:16.213603 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/941ddcdd-0183-45d6-929e-e4138126657d-config\") pod \"controller-manager-5fb6cd4bfd-x5c9k\" (UID: \"941ddcdd-0183-45d6-929e-e4138126657d\") " pod="openshift-controller-manager/controller-manager-5fb6cd4bfd-x5c9k" Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.213712 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/941ddcdd-0183-45d6-929e-e4138126657d-client-ca\") pod \"controller-manager-5fb6cd4bfd-x5c9k\" (UID: \"941ddcdd-0183-45d6-929e-e4138126657d\") " pod="openshift-controller-manager/controller-manager-5fb6cd4bfd-x5c9k" Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.213796 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pctql\" (UniqueName: \"kubernetes.io/projected/941ddcdd-0183-45d6-929e-e4138126657d-kube-api-access-pctql\") pod \"controller-manager-5fb6cd4bfd-x5c9k\" (UID: \"941ddcdd-0183-45d6-929e-e4138126657d\") " pod="openshift-controller-manager/controller-manager-5fb6cd4bfd-x5c9k" Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.213920 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/941ddcdd-0183-45d6-929e-e4138126657d-proxy-ca-bundles\") pod \"controller-manager-5fb6cd4bfd-x5c9k\" (UID: \"941ddcdd-0183-45d6-929e-e4138126657d\") " pod="openshift-controller-manager/controller-manager-5fb6cd4bfd-x5c9k" Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.214024 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/941ddcdd-0183-45d6-929e-e4138126657d-serving-cert\") pod 
\"controller-manager-5fb6cd4bfd-x5c9k\" (UID: \"941ddcdd-0183-45d6-929e-e4138126657d\") " pod="openshift-controller-manager/controller-manager-5fb6cd4bfd-x5c9k" Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.214175 5115 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c3904fe4-fb4d-4794-8d28-a76e420c437f-tmp\") on node \"crc\" DevicePath \"\"" Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.214242 5115 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c3904fe4-fb4d-4794-8d28-a76e420c437f-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.214296 5115 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3904fe4-fb4d-4794-8d28-a76e420c437f-config\") on node \"crc\" DevicePath \"\"" Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.214349 5115 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c3904fe4-fb4d-4794-8d28-a76e420c437f-client-ca\") on node \"crc\" DevicePath \"\"" Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.214426 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-g99kg\" (UniqueName: \"kubernetes.io/projected/c3904fe4-fb4d-4794-8d28-a76e420c437f-kube-api-access-g99kg\") on node \"crc\" DevicePath \"\"" Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.214500 5115 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c3904fe4-fb4d-4794-8d28-a76e420c437f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.222482 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-668cf4f594-bg2ms" Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.224952 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3904fe4-fb4d-4794-8d28-a76e420c437f" path="/var/lib/kubelet/pods/c3904fe4-fb4d-4794-8d28-a76e420c437f/volumes" Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.225495 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea354490-c1e9-4cb2-a05e-2691aa628f04" path="/var/lib/kubelet/pods/ea354490-c1e9-4cb2-a05e-2691aa628f04/volumes" Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.316162 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/941ddcdd-0183-45d6-929e-e4138126657d-tmp\") pod \"controller-manager-5fb6cd4bfd-x5c9k\" (UID: \"941ddcdd-0183-45d6-929e-e4138126657d\") " pod="openshift-controller-manager/controller-manager-5fb6cd4bfd-x5c9k" Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.316820 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/941ddcdd-0183-45d6-929e-e4138126657d-config\") pod \"controller-manager-5fb6cd4bfd-x5c9k\" (UID: \"941ddcdd-0183-45d6-929e-e4138126657d\") " pod="openshift-controller-manager/controller-manager-5fb6cd4bfd-x5c9k" Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.316985 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/941ddcdd-0183-45d6-929e-e4138126657d-client-ca\") pod \"controller-manager-5fb6cd4bfd-x5c9k\" (UID: \"941ddcdd-0183-45d6-929e-e4138126657d\") " pod="openshift-controller-manager/controller-manager-5fb6cd4bfd-x5c9k" Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.317138 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pctql\" 
(UniqueName: \"kubernetes.io/projected/941ddcdd-0183-45d6-929e-e4138126657d-kube-api-access-pctql\") pod \"controller-manager-5fb6cd4bfd-x5c9k\" (UID: \"941ddcdd-0183-45d6-929e-e4138126657d\") " pod="openshift-controller-manager/controller-manager-5fb6cd4bfd-x5c9k" Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.317312 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/941ddcdd-0183-45d6-929e-e4138126657d-tmp\") pod \"controller-manager-5fb6cd4bfd-x5c9k\" (UID: \"941ddcdd-0183-45d6-929e-e4138126657d\") " pod="openshift-controller-manager/controller-manager-5fb6cd4bfd-x5c9k" Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.317446 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/941ddcdd-0183-45d6-929e-e4138126657d-proxy-ca-bundles\") pod \"controller-manager-5fb6cd4bfd-x5c9k\" (UID: \"941ddcdd-0183-45d6-929e-e4138126657d\") " pod="openshift-controller-manager/controller-manager-5fb6cd4bfd-x5c9k" Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.317582 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/941ddcdd-0183-45d6-929e-e4138126657d-serving-cert\") pod \"controller-manager-5fb6cd4bfd-x5c9k\" (UID: \"941ddcdd-0183-45d6-929e-e4138126657d\") " pod="openshift-controller-manager/controller-manager-5fb6cd4bfd-x5c9k" Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.318236 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/941ddcdd-0183-45d6-929e-e4138126657d-client-ca\") pod \"controller-manager-5fb6cd4bfd-x5c9k\" (UID: \"941ddcdd-0183-45d6-929e-e4138126657d\") " pod="openshift-controller-manager/controller-manager-5fb6cd4bfd-x5c9k" Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.318859 5115 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/941ddcdd-0183-45d6-929e-e4138126657d-config\") pod \"controller-manager-5fb6cd4bfd-x5c9k\" (UID: \"941ddcdd-0183-45d6-929e-e4138126657d\") " pod="openshift-controller-manager/controller-manager-5fb6cd4bfd-x5c9k" Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.319072 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/941ddcdd-0183-45d6-929e-e4138126657d-proxy-ca-bundles\") pod \"controller-manager-5fb6cd4bfd-x5c9k\" (UID: \"941ddcdd-0183-45d6-929e-e4138126657d\") " pod="openshift-controller-manager/controller-manager-5fb6cd4bfd-x5c9k" Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.323182 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/941ddcdd-0183-45d6-929e-e4138126657d-serving-cert\") pod \"controller-manager-5fb6cd4bfd-x5c9k\" (UID: \"941ddcdd-0183-45d6-929e-e4138126657d\") " pod="openshift-controller-manager/controller-manager-5fb6cd4bfd-x5c9k" Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.337708 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pctql\" (UniqueName: \"kubernetes.io/projected/941ddcdd-0183-45d6-929e-e4138126657d-kube-api-access-pctql\") pod \"controller-manager-5fb6cd4bfd-x5c9k\" (UID: \"941ddcdd-0183-45d6-929e-e4138126657d\") " pod="openshift-controller-manager/controller-manager-5fb6cd4bfd-x5c9k" Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.411796 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5fb6cd4bfd-x5c9k" Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.617581 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5fb6cd4bfd-x5c9k"] Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.619841 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-668cf4f594-bg2ms"] Jan 20 09:11:16 crc kubenswrapper[5115]: W0120 09:11:16.630487 5115 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod941ddcdd_0183_45d6_929e_e4138126657d.slice/crio-8c3a4761f527173e089965db8d66967c048e37252d38c580a1f92fdbc0252b00 WatchSource:0}: Error finding container 8c3a4761f527173e089965db8d66967c048e37252d38c580a1f92fdbc0252b00: Status 404 returned error can't find the container with id 8c3a4761f527173e089965db8d66967c048e37252d38c580a1f92fdbc0252b00 Jan 20 09:11:17 crc kubenswrapper[5115]: I0120 09:11:17.148491 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5fb6cd4bfd-x5c9k" event={"ID":"941ddcdd-0183-45d6-929e-e4138126657d","Type":"ContainerStarted","Data":"04fdb76e4c3d5800656c6368715bd08cc2c5d4bfc4fafdc41c25304461f5b220"} Jan 20 09:11:17 crc kubenswrapper[5115]: I0120 09:11:17.148861 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5fb6cd4bfd-x5c9k" event={"ID":"941ddcdd-0183-45d6-929e-e4138126657d","Type":"ContainerStarted","Data":"8c3a4761f527173e089965db8d66967c048e37252d38c580a1f92fdbc0252b00"} Jan 20 09:11:17 crc kubenswrapper[5115]: I0120 09:11:17.148884 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-5fb6cd4bfd-x5c9k" Jan 20 09:11:17 crc kubenswrapper[5115]: I0120 09:11:17.153387 5115 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-668cf4f594-bg2ms" event={"ID":"0445ff5a-7f56-4085-98a2-35f8418fc9b5","Type":"ContainerStarted","Data":"458af259f686d47aa4a98aab2dd0cb4e40b1786ecacbd6440592284ca6834308"} Jan 20 09:11:17 crc kubenswrapper[5115]: I0120 09:11:17.153412 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-668cf4f594-bg2ms" event={"ID":"0445ff5a-7f56-4085-98a2-35f8418fc9b5","Type":"ContainerStarted","Data":"43d6fd31b5f6c85f09558bdd078897e3c86d6bae035ecb48d217d8449927c41f"} Jan 20 09:11:17 crc kubenswrapper[5115]: I0120 09:11:17.153426 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-668cf4f594-bg2ms" Jan 20 09:11:17 crc kubenswrapper[5115]: I0120 09:11:17.191722 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5fb6cd4bfd-x5c9k" podStartSLOduration=2.191696043 podStartE2EDuration="2.191696043s" podCreationTimestamp="2026-01-20 09:11:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:11:17.184548792 +0000 UTC m=+187.353327342" watchObservedRunningTime="2026-01-20 09:11:17.191696043 +0000 UTC m=+187.360474573" Jan 20 09:11:17 crc kubenswrapper[5115]: I0120 09:11:17.207959 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-668cf4f594-bg2ms" podStartSLOduration=2.207930528 podStartE2EDuration="2.207930528s" podCreationTimestamp="2026-01-20 09:11:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:11:17.204144167 +0000 UTC m=+187.372922697" 
watchObservedRunningTime="2026-01-20 09:11:17.207930528 +0000 UTC m=+187.376709048" Jan 20 09:11:17 crc kubenswrapper[5115]: I0120 09:11:17.473826 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-668cf4f594-bg2ms" Jan 20 09:11:17 crc kubenswrapper[5115]: I0120 09:11:17.569394 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5fb6cd4bfd-x5c9k" Jan 20 09:11:22 crc kubenswrapper[5115]: I0120 09:11:22.564556 5115 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-c88bx"] Jan 20 09:11:35 crc kubenswrapper[5115]: I0120 09:11:35.386863 5115 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5fb6cd4bfd-x5c9k"] Jan 20 09:11:35 crc kubenswrapper[5115]: I0120 09:11:35.387925 5115 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5fb6cd4bfd-x5c9k" podUID="941ddcdd-0183-45d6-929e-e4138126657d" containerName="controller-manager" containerID="cri-o://04fdb76e4c3d5800656c6368715bd08cc2c5d4bfc4fafdc41c25304461f5b220" gracePeriod=30 Jan 20 09:11:35 crc kubenswrapper[5115]: I0120 09:11:35.404294 5115 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-668cf4f594-bg2ms"] Jan 20 09:11:35 crc kubenswrapper[5115]: I0120 09:11:35.404628 5115 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-668cf4f594-bg2ms" podUID="0445ff5a-7f56-4085-98a2-35f8418fc9b5" containerName="route-controller-manager" containerID="cri-o://458af259f686d47aa4a98aab2dd0cb4e40b1786ecacbd6440592284ca6834308" gracePeriod=30 Jan 20 09:11:35 crc kubenswrapper[5115]: I0120 09:11:35.910138 5115 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-668cf4f594-bg2ms" Jan 20 09:11:35 crc kubenswrapper[5115]: I0120 09:11:35.941329 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b95c9954c-nvlzz"] Jan 20 09:11:35 crc kubenswrapper[5115]: I0120 09:11:35.942353 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0445ff5a-7f56-4085-98a2-35f8418fc9b5" containerName="route-controller-manager" Jan 20 09:11:35 crc kubenswrapper[5115]: I0120 09:11:35.942376 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="0445ff5a-7f56-4085-98a2-35f8418fc9b5" containerName="route-controller-manager" Jan 20 09:11:35 crc kubenswrapper[5115]: I0120 09:11:35.942484 5115 memory_manager.go:356] "RemoveStaleState removing state" podUID="0445ff5a-7f56-4085-98a2-35f8418fc9b5" containerName="route-controller-manager" Jan 20 09:11:35 crc kubenswrapper[5115]: I0120 09:11:35.947735 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6b95c9954c-nvlzz" Jan 20 09:11:35 crc kubenswrapper[5115]: I0120 09:11:35.950971 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b95c9954c-nvlzz"] Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.055647 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0445ff5a-7f56-4085-98a2-35f8418fc9b5-config\") pod \"0445ff5a-7f56-4085-98a2-35f8418fc9b5\" (UID: \"0445ff5a-7f56-4085-98a2-35f8418fc9b5\") " Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.056204 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0445ff5a-7f56-4085-98a2-35f8418fc9b5-tmp\") pod \"0445ff5a-7f56-4085-98a2-35f8418fc9b5\" (UID: \"0445ff5a-7f56-4085-98a2-35f8418fc9b5\") " Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.056315 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwjbl\" (UniqueName: \"kubernetes.io/projected/0445ff5a-7f56-4085-98a2-35f8418fc9b5-kube-api-access-gwjbl\") pod \"0445ff5a-7f56-4085-98a2-35f8418fc9b5\" (UID: \"0445ff5a-7f56-4085-98a2-35f8418fc9b5\") " Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.056353 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0445ff5a-7f56-4085-98a2-35f8418fc9b5-client-ca\") pod \"0445ff5a-7f56-4085-98a2-35f8418fc9b5\" (UID: \"0445ff5a-7f56-4085-98a2-35f8418fc9b5\") " Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.056379 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0445ff5a-7f56-4085-98a2-35f8418fc9b5-serving-cert\") pod \"0445ff5a-7f56-4085-98a2-35f8418fc9b5\" (UID: 
\"0445ff5a-7f56-4085-98a2-35f8418fc9b5\") " Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.056556 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6dbb2166-3ca6-40c1-8837-22587ad8df2e-serving-cert\") pod \"route-controller-manager-6b95c9954c-nvlzz\" (UID: \"6dbb2166-3ca6-40c1-8837-22587ad8df2e\") " pod="openshift-route-controller-manager/route-controller-manager-6b95c9954c-nvlzz" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.056599 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6dbb2166-3ca6-40c1-8837-22587ad8df2e-config\") pod \"route-controller-manager-6b95c9954c-nvlzz\" (UID: \"6dbb2166-3ca6-40c1-8837-22587ad8df2e\") " pod="openshift-route-controller-manager/route-controller-manager-6b95c9954c-nvlzz" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.056634 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fr57s\" (UniqueName: \"kubernetes.io/projected/6dbb2166-3ca6-40c1-8837-22587ad8df2e-kube-api-access-fr57s\") pod \"route-controller-manager-6b95c9954c-nvlzz\" (UID: \"6dbb2166-3ca6-40c1-8837-22587ad8df2e\") " pod="openshift-route-controller-manager/route-controller-manager-6b95c9954c-nvlzz" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.056639 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0445ff5a-7f56-4085-98a2-35f8418fc9b5-tmp" (OuterVolumeSpecName: "tmp") pod "0445ff5a-7f56-4085-98a2-35f8418fc9b5" (UID: "0445ff5a-7f56-4085-98a2-35f8418fc9b5"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.056715 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0445ff5a-7f56-4085-98a2-35f8418fc9b5-config" (OuterVolumeSpecName: "config") pod "0445ff5a-7f56-4085-98a2-35f8418fc9b5" (UID: "0445ff5a-7f56-4085-98a2-35f8418fc9b5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.056829 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6dbb2166-3ca6-40c1-8837-22587ad8df2e-client-ca\") pod \"route-controller-manager-6b95c9954c-nvlzz\" (UID: \"6dbb2166-3ca6-40c1-8837-22587ad8df2e\") " pod="openshift-route-controller-manager/route-controller-manager-6b95c9954c-nvlzz" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.056866 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6dbb2166-3ca6-40c1-8837-22587ad8df2e-tmp\") pod \"route-controller-manager-6b95c9954c-nvlzz\" (UID: \"6dbb2166-3ca6-40c1-8837-22587ad8df2e\") " pod="openshift-route-controller-manager/route-controller-manager-6b95c9954c-nvlzz" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.057034 5115 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0445ff5a-7f56-4085-98a2-35f8418fc9b5-config\") on node \"crc\" DevicePath \"\"" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.057060 5115 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0445ff5a-7f56-4085-98a2-35f8418fc9b5-tmp\") on node \"crc\" DevicePath \"\"" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.057042 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/0445ff5a-7f56-4085-98a2-35f8418fc9b5-client-ca" (OuterVolumeSpecName: "client-ca") pod "0445ff5a-7f56-4085-98a2-35f8418fc9b5" (UID: "0445ff5a-7f56-4085-98a2-35f8418fc9b5"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.072727 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0445ff5a-7f56-4085-98a2-35f8418fc9b5-kube-api-access-gwjbl" (OuterVolumeSpecName: "kube-api-access-gwjbl") pod "0445ff5a-7f56-4085-98a2-35f8418fc9b5" (UID: "0445ff5a-7f56-4085-98a2-35f8418fc9b5"). InnerVolumeSpecName "kube-api-access-gwjbl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.073623 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0445ff5a-7f56-4085-98a2-35f8418fc9b5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0445ff5a-7f56-4085-98a2-35f8418fc9b5" (UID: "0445ff5a-7f56-4085-98a2-35f8418fc9b5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.153460 5115 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5fb6cd4bfd-x5c9k" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.158121 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6dbb2166-3ca6-40c1-8837-22587ad8df2e-client-ca\") pod \"route-controller-manager-6b95c9954c-nvlzz\" (UID: \"6dbb2166-3ca6-40c1-8837-22587ad8df2e\") " pod="openshift-route-controller-manager/route-controller-manager-6b95c9954c-nvlzz" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.158183 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6dbb2166-3ca6-40c1-8837-22587ad8df2e-tmp\") pod \"route-controller-manager-6b95c9954c-nvlzz\" (UID: \"6dbb2166-3ca6-40c1-8837-22587ad8df2e\") " pod="openshift-route-controller-manager/route-controller-manager-6b95c9954c-nvlzz" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.158233 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6dbb2166-3ca6-40c1-8837-22587ad8df2e-serving-cert\") pod \"route-controller-manager-6b95c9954c-nvlzz\" (UID: \"6dbb2166-3ca6-40c1-8837-22587ad8df2e\") " pod="openshift-route-controller-manager/route-controller-manager-6b95c9954c-nvlzz" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.158272 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6dbb2166-3ca6-40c1-8837-22587ad8df2e-config\") pod \"route-controller-manager-6b95c9954c-nvlzz\" (UID: \"6dbb2166-3ca6-40c1-8837-22587ad8df2e\") " pod="openshift-route-controller-manager/route-controller-manager-6b95c9954c-nvlzz" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.158323 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fr57s\" (UniqueName: 
\"kubernetes.io/projected/6dbb2166-3ca6-40c1-8837-22587ad8df2e-kube-api-access-fr57s\") pod \"route-controller-manager-6b95c9954c-nvlzz\" (UID: \"6dbb2166-3ca6-40c1-8837-22587ad8df2e\") " pod="openshift-route-controller-manager/route-controller-manager-6b95c9954c-nvlzz" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.158413 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gwjbl\" (UniqueName: \"kubernetes.io/projected/0445ff5a-7f56-4085-98a2-35f8418fc9b5-kube-api-access-gwjbl\") on node \"crc\" DevicePath \"\"" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.158430 5115 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0445ff5a-7f56-4085-98a2-35f8418fc9b5-client-ca\") on node \"crc\" DevicePath \"\"" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.158443 5115 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0445ff5a-7f56-4085-98a2-35f8418fc9b5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.159255 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6dbb2166-3ca6-40c1-8837-22587ad8df2e-tmp\") pod \"route-controller-manager-6b95c9954c-nvlzz\" (UID: \"6dbb2166-3ca6-40c1-8837-22587ad8df2e\") " pod="openshift-route-controller-manager/route-controller-manager-6b95c9954c-nvlzz" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.159471 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6dbb2166-3ca6-40c1-8837-22587ad8df2e-client-ca\") pod \"route-controller-manager-6b95c9954c-nvlzz\" (UID: \"6dbb2166-3ca6-40c1-8837-22587ad8df2e\") " pod="openshift-route-controller-manager/route-controller-manager-6b95c9954c-nvlzz" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.161043 5115 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6dbb2166-3ca6-40c1-8837-22587ad8df2e-config\") pod \"route-controller-manager-6b95c9954c-nvlzz\" (UID: \"6dbb2166-3ca6-40c1-8837-22587ad8df2e\") " pod="openshift-route-controller-manager/route-controller-manager-6b95c9954c-nvlzz" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.164767 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6dbb2166-3ca6-40c1-8837-22587ad8df2e-serving-cert\") pod \"route-controller-manager-6b95c9954c-nvlzz\" (UID: \"6dbb2166-3ca6-40c1-8837-22587ad8df2e\") " pod="openshift-route-controller-manager/route-controller-manager-6b95c9954c-nvlzz" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.186553 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6cb7c98cbc-lhp2l"] Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.186794 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fr57s\" (UniqueName: \"kubernetes.io/projected/6dbb2166-3ca6-40c1-8837-22587ad8df2e-kube-api-access-fr57s\") pod \"route-controller-manager-6b95c9954c-nvlzz\" (UID: \"6dbb2166-3ca6-40c1-8837-22587ad8df2e\") " pod="openshift-route-controller-manager/route-controller-manager-6b95c9954c-nvlzz" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.187151 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="941ddcdd-0183-45d6-929e-e4138126657d" containerName="controller-manager" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.187171 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="941ddcdd-0183-45d6-929e-e4138126657d" containerName="controller-manager" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.187285 5115 memory_manager.go:356] "RemoveStaleState removing state" podUID="941ddcdd-0183-45d6-929e-e4138126657d" 
containerName="controller-manager" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.193832 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6cb7c98cbc-lhp2l" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.204306 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6cb7c98cbc-lhp2l"] Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.259367 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/941ddcdd-0183-45d6-929e-e4138126657d-config\") pod \"941ddcdd-0183-45d6-929e-e4138126657d\" (UID: \"941ddcdd-0183-45d6-929e-e4138126657d\") " Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.259463 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/941ddcdd-0183-45d6-929e-e4138126657d-tmp\") pod \"941ddcdd-0183-45d6-929e-e4138126657d\" (UID: \"941ddcdd-0183-45d6-929e-e4138126657d\") " Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.259543 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pctql\" (UniqueName: \"kubernetes.io/projected/941ddcdd-0183-45d6-929e-e4138126657d-kube-api-access-pctql\") pod \"941ddcdd-0183-45d6-929e-e4138126657d\" (UID: \"941ddcdd-0183-45d6-929e-e4138126657d\") " Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.259586 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/941ddcdd-0183-45d6-929e-e4138126657d-serving-cert\") pod \"941ddcdd-0183-45d6-929e-e4138126657d\" (UID: \"941ddcdd-0183-45d6-929e-e4138126657d\") " Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.259691 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/941ddcdd-0183-45d6-929e-e4138126657d-proxy-ca-bundles\") pod \"941ddcdd-0183-45d6-929e-e4138126657d\" (UID: \"941ddcdd-0183-45d6-929e-e4138126657d\") " Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.259756 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/941ddcdd-0183-45d6-929e-e4138126657d-client-ca\") pod \"941ddcdd-0183-45d6-929e-e4138126657d\" (UID: \"941ddcdd-0183-45d6-929e-e4138126657d\") " Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.259954 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0e0393a6-c76b-4bd6-9358-0314c2eca550-tmp\") pod \"controller-manager-6cb7c98cbc-lhp2l\" (UID: \"0e0393a6-c76b-4bd6-9358-0314c2eca550\") " pod="openshift-controller-manager/controller-manager-6cb7c98cbc-lhp2l" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.260047 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e0393a6-c76b-4bd6-9358-0314c2eca550-config\") pod \"controller-manager-6cb7c98cbc-lhp2l\" (UID: \"0e0393a6-c76b-4bd6-9358-0314c2eca550\") " pod="openshift-controller-manager/controller-manager-6cb7c98cbc-lhp2l" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.260094 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e0393a6-c76b-4bd6-9358-0314c2eca550-serving-cert\") pod \"controller-manager-6cb7c98cbc-lhp2l\" (UID: \"0e0393a6-c76b-4bd6-9358-0314c2eca550\") " pod="openshift-controller-manager/controller-manager-6cb7c98cbc-lhp2l" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.260143 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/0e0393a6-c76b-4bd6-9358-0314c2eca550-proxy-ca-bundles\") pod \"controller-manager-6cb7c98cbc-lhp2l\" (UID: \"0e0393a6-c76b-4bd6-9358-0314c2eca550\") " pod="openshift-controller-manager/controller-manager-6cb7c98cbc-lhp2l" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.260176 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/941ddcdd-0183-45d6-929e-e4138126657d-tmp" (OuterVolumeSpecName: "tmp") pod "941ddcdd-0183-45d6-929e-e4138126657d" (UID: "941ddcdd-0183-45d6-929e-e4138126657d"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.260231 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0e0393a6-c76b-4bd6-9358-0314c2eca550-client-ca\") pod \"controller-manager-6cb7c98cbc-lhp2l\" (UID: \"0e0393a6-c76b-4bd6-9358-0314c2eca550\") " pod="openshift-controller-manager/controller-manager-6cb7c98cbc-lhp2l" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.260364 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8q8k\" (UniqueName: \"kubernetes.io/projected/0e0393a6-c76b-4bd6-9358-0314c2eca550-kube-api-access-k8q8k\") pod \"controller-manager-6cb7c98cbc-lhp2l\" (UID: \"0e0393a6-c76b-4bd6-9358-0314c2eca550\") " pod="openshift-controller-manager/controller-manager-6cb7c98cbc-lhp2l" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.260455 5115 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/941ddcdd-0183-45d6-929e-e4138126657d-tmp\") on node \"crc\" DevicePath \"\"" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.260455 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/941ddcdd-0183-45d6-929e-e4138126657d-config" 
(OuterVolumeSpecName: "config") pod "941ddcdd-0183-45d6-929e-e4138126657d" (UID: "941ddcdd-0183-45d6-929e-e4138126657d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.260953 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/941ddcdd-0183-45d6-929e-e4138126657d-client-ca" (OuterVolumeSpecName: "client-ca") pod "941ddcdd-0183-45d6-929e-e4138126657d" (UID: "941ddcdd-0183-45d6-929e-e4138126657d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.261095 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/941ddcdd-0183-45d6-929e-e4138126657d-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "941ddcdd-0183-45d6-929e-e4138126657d" (UID: "941ddcdd-0183-45d6-929e-e4138126657d"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.263518 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/941ddcdd-0183-45d6-929e-e4138126657d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "941ddcdd-0183-45d6-929e-e4138126657d" (UID: "941ddcdd-0183-45d6-929e-e4138126657d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.263961 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6b95c9954c-nvlzz" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.264857 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/941ddcdd-0183-45d6-929e-e4138126657d-kube-api-access-pctql" (OuterVolumeSpecName: "kube-api-access-pctql") pod "941ddcdd-0183-45d6-929e-e4138126657d" (UID: "941ddcdd-0183-45d6-929e-e4138126657d"). InnerVolumeSpecName "kube-api-access-pctql". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.290580 5115 generic.go:358] "Generic (PLEG): container finished" podID="0445ff5a-7f56-4085-98a2-35f8418fc9b5" containerID="458af259f686d47aa4a98aab2dd0cb4e40b1786ecacbd6440592284ca6834308" exitCode=0 Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.291164 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-668cf4f594-bg2ms" event={"ID":"0445ff5a-7f56-4085-98a2-35f8418fc9b5","Type":"ContainerDied","Data":"458af259f686d47aa4a98aab2dd0cb4e40b1786ecacbd6440592284ca6834308"} Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.291207 5115 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-668cf4f594-bg2ms" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.291235 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-668cf4f594-bg2ms" event={"ID":"0445ff5a-7f56-4085-98a2-35f8418fc9b5","Type":"ContainerDied","Data":"43d6fd31b5f6c85f09558bdd078897e3c86d6bae035ecb48d217d8449927c41f"} Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.291282 5115 scope.go:117] "RemoveContainer" containerID="458af259f686d47aa4a98aab2dd0cb4e40b1786ecacbd6440592284ca6834308" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.293192 5115 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5fb6cd4bfd-x5c9k" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.293227 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5fb6cd4bfd-x5c9k" event={"ID":"941ddcdd-0183-45d6-929e-e4138126657d","Type":"ContainerDied","Data":"04fdb76e4c3d5800656c6368715bd08cc2c5d4bfc4fafdc41c25304461f5b220"} Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.293388 5115 generic.go:358] "Generic (PLEG): container finished" podID="941ddcdd-0183-45d6-929e-e4138126657d" containerID="04fdb76e4c3d5800656c6368715bd08cc2c5d4bfc4fafdc41c25304461f5b220" exitCode=0 Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.293690 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5fb6cd4bfd-x5c9k" event={"ID":"941ddcdd-0183-45d6-929e-e4138126657d","Type":"ContainerDied","Data":"8c3a4761f527173e089965db8d66967c048e37252d38c580a1f92fdbc0252b00"} Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.323344 5115 scope.go:117] "RemoveContainer" containerID="458af259f686d47aa4a98aab2dd0cb4e40b1786ecacbd6440592284ca6834308" Jan 20 09:11:36 crc 
kubenswrapper[5115]: E0120 09:11:36.324128 5115 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"458af259f686d47aa4a98aab2dd0cb4e40b1786ecacbd6440592284ca6834308\": container with ID starting with 458af259f686d47aa4a98aab2dd0cb4e40b1786ecacbd6440592284ca6834308 not found: ID does not exist" containerID="458af259f686d47aa4a98aab2dd0cb4e40b1786ecacbd6440592284ca6834308" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.324162 5115 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"458af259f686d47aa4a98aab2dd0cb4e40b1786ecacbd6440592284ca6834308"} err="failed to get container status \"458af259f686d47aa4a98aab2dd0cb4e40b1786ecacbd6440592284ca6834308\": rpc error: code = NotFound desc = could not find container \"458af259f686d47aa4a98aab2dd0cb4e40b1786ecacbd6440592284ca6834308\": container with ID starting with 458af259f686d47aa4a98aab2dd0cb4e40b1786ecacbd6440592284ca6834308 not found: ID does not exist" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.324185 5115 scope.go:117] "RemoveContainer" containerID="04fdb76e4c3d5800656c6368715bd08cc2c5d4bfc4fafdc41c25304461f5b220" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.326400 5115 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-668cf4f594-bg2ms"] Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.328991 5115 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-668cf4f594-bg2ms"] Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.346785 5115 scope.go:117] "RemoveContainer" containerID="04fdb76e4c3d5800656c6368715bd08cc2c5d4bfc4fafdc41c25304461f5b220" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.346860 5115 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5fb6cd4bfd-x5c9k"] Jan 20 
09:11:36 crc kubenswrapper[5115]: E0120 09:11:36.347203 5115 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04fdb76e4c3d5800656c6368715bd08cc2c5d4bfc4fafdc41c25304461f5b220\": container with ID starting with 04fdb76e4c3d5800656c6368715bd08cc2c5d4bfc4fafdc41c25304461f5b220 not found: ID does not exist" containerID="04fdb76e4c3d5800656c6368715bd08cc2c5d4bfc4fafdc41c25304461f5b220" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.347223 5115 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04fdb76e4c3d5800656c6368715bd08cc2c5d4bfc4fafdc41c25304461f5b220"} err="failed to get container status \"04fdb76e4c3d5800656c6368715bd08cc2c5d4bfc4fafdc41c25304461f5b220\": rpc error: code = NotFound desc = could not find container \"04fdb76e4c3d5800656c6368715bd08cc2c5d4bfc4fafdc41c25304461f5b220\": container with ID starting with 04fdb76e4c3d5800656c6368715bd08cc2c5d4bfc4fafdc41c25304461f5b220 not found: ID does not exist" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.350654 5115 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5fb6cd4bfd-x5c9k"] Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.361361 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0e0393a6-c76b-4bd6-9358-0314c2eca550-client-ca\") pod \"controller-manager-6cb7c98cbc-lhp2l\" (UID: \"0e0393a6-c76b-4bd6-9358-0314c2eca550\") " pod="openshift-controller-manager/controller-manager-6cb7c98cbc-lhp2l" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.361407 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-k8q8k\" (UniqueName: \"kubernetes.io/projected/0e0393a6-c76b-4bd6-9358-0314c2eca550-kube-api-access-k8q8k\") pod \"controller-manager-6cb7c98cbc-lhp2l\" (UID: 
\"0e0393a6-c76b-4bd6-9358-0314c2eca550\") " pod="openshift-controller-manager/controller-manager-6cb7c98cbc-lhp2l" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.361465 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0e0393a6-c76b-4bd6-9358-0314c2eca550-tmp\") pod \"controller-manager-6cb7c98cbc-lhp2l\" (UID: \"0e0393a6-c76b-4bd6-9358-0314c2eca550\") " pod="openshift-controller-manager/controller-manager-6cb7c98cbc-lhp2l" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.361538 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e0393a6-c76b-4bd6-9358-0314c2eca550-config\") pod \"controller-manager-6cb7c98cbc-lhp2l\" (UID: \"0e0393a6-c76b-4bd6-9358-0314c2eca550\") " pod="openshift-controller-manager/controller-manager-6cb7c98cbc-lhp2l" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.361565 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e0393a6-c76b-4bd6-9358-0314c2eca550-serving-cert\") pod \"controller-manager-6cb7c98cbc-lhp2l\" (UID: \"0e0393a6-c76b-4bd6-9358-0314c2eca550\") " pod="openshift-controller-manager/controller-manager-6cb7c98cbc-lhp2l" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.361604 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0e0393a6-c76b-4bd6-9358-0314c2eca550-proxy-ca-bundles\") pod \"controller-manager-6cb7c98cbc-lhp2l\" (UID: \"0e0393a6-c76b-4bd6-9358-0314c2eca550\") " pod="openshift-controller-manager/controller-manager-6cb7c98cbc-lhp2l" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.361650 5115 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/941ddcdd-0183-45d6-929e-e4138126657d-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.361660 5115 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/941ddcdd-0183-45d6-929e-e4138126657d-client-ca\") on node \"crc\" DevicePath \"\"" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.361669 5115 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/941ddcdd-0183-45d6-929e-e4138126657d-config\") on node \"crc\" DevicePath \"\"" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.361678 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pctql\" (UniqueName: \"kubernetes.io/projected/941ddcdd-0183-45d6-929e-e4138126657d-kube-api-access-pctql\") on node \"crc\" DevicePath \"\"" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.361688 5115 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/941ddcdd-0183-45d6-929e-e4138126657d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.365441 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0e0393a6-c76b-4bd6-9358-0314c2eca550-tmp\") pod \"controller-manager-6cb7c98cbc-lhp2l\" (UID: \"0e0393a6-c76b-4bd6-9358-0314c2eca550\") " pod="openshift-controller-manager/controller-manager-6cb7c98cbc-lhp2l" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.365686 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0e0393a6-c76b-4bd6-9358-0314c2eca550-client-ca\") pod \"controller-manager-6cb7c98cbc-lhp2l\" (UID: \"0e0393a6-c76b-4bd6-9358-0314c2eca550\") " pod="openshift-controller-manager/controller-manager-6cb7c98cbc-lhp2l" Jan 20 09:11:36 crc 
kubenswrapper[5115]: I0120 09:11:36.366343 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e0393a6-c76b-4bd6-9358-0314c2eca550-config\") pod \"controller-manager-6cb7c98cbc-lhp2l\" (UID: \"0e0393a6-c76b-4bd6-9358-0314c2eca550\") " pod="openshift-controller-manager/controller-manager-6cb7c98cbc-lhp2l" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.368644 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0e0393a6-c76b-4bd6-9358-0314c2eca550-proxy-ca-bundles\") pod \"controller-manager-6cb7c98cbc-lhp2l\" (UID: \"0e0393a6-c76b-4bd6-9358-0314c2eca550\") " pod="openshift-controller-manager/controller-manager-6cb7c98cbc-lhp2l" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.369272 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e0393a6-c76b-4bd6-9358-0314c2eca550-serving-cert\") pod \"controller-manager-6cb7c98cbc-lhp2l\" (UID: \"0e0393a6-c76b-4bd6-9358-0314c2eca550\") " pod="openshift-controller-manager/controller-manager-6cb7c98cbc-lhp2l" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.388303 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8q8k\" (UniqueName: \"kubernetes.io/projected/0e0393a6-c76b-4bd6-9358-0314c2eca550-kube-api-access-k8q8k\") pod \"controller-manager-6cb7c98cbc-lhp2l\" (UID: \"0e0393a6-c76b-4bd6-9358-0314c2eca550\") " pod="openshift-controller-manager/controller-manager-6cb7c98cbc-lhp2l" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.519755 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6cb7c98cbc-lhp2l" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.740692 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b95c9954c-nvlzz"] Jan 20 09:11:36 crc kubenswrapper[5115]: W0120 09:11:36.745749 5115 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6dbb2166_3ca6_40c1_8837_22587ad8df2e.slice/crio-368a735da1f99fc4138c761b29484fa6a4c95fa01e8ee82b62c23cf95bf3f7b8 WatchSource:0}: Error finding container 368a735da1f99fc4138c761b29484fa6a4c95fa01e8ee82b62c23cf95bf3f7b8: Status 404 returned error can't find the container with id 368a735da1f99fc4138c761b29484fa6a4c95fa01e8ee82b62c23cf95bf3f7b8 Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.973637 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6cb7c98cbc-lhp2l"] Jan 20 09:11:36 crc kubenswrapper[5115]: W0120 09:11:36.979158 5115 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e0393a6_c76b_4bd6_9358_0314c2eca550.slice/crio-16f00ae2e909bdbab9e9f0bb68dfa4c4d6e9c21c455eefd3d26a54cf17f6d6dd WatchSource:0}: Error finding container 16f00ae2e909bdbab9e9f0bb68dfa4c4d6e9c21c455eefd3d26a54cf17f6d6dd: Status 404 returned error can't find the container with id 16f00ae2e909bdbab9e9f0bb68dfa4c4d6e9c21c455eefd3d26a54cf17f6d6dd Jan 20 09:11:37 crc kubenswrapper[5115]: I0120 09:11:37.302556 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6b95c9954c-nvlzz" event={"ID":"6dbb2166-3ca6-40c1-8837-22587ad8df2e","Type":"ContainerStarted","Data":"694c50e214a14f27a0eff68145449372cab7bc07d76f0814f74f905d81efe8ea"} Jan 20 09:11:37 crc kubenswrapper[5115]: I0120 09:11:37.302621 5115 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6b95c9954c-nvlzz" event={"ID":"6dbb2166-3ca6-40c1-8837-22587ad8df2e","Type":"ContainerStarted","Data":"368a735da1f99fc4138c761b29484fa6a4c95fa01e8ee82b62c23cf95bf3f7b8"} Jan 20 09:11:37 crc kubenswrapper[5115]: I0120 09:11:37.302812 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-6b95c9954c-nvlzz" Jan 20 09:11:37 crc kubenswrapper[5115]: I0120 09:11:37.306445 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6cb7c98cbc-lhp2l" event={"ID":"0e0393a6-c76b-4bd6-9358-0314c2eca550","Type":"ContainerStarted","Data":"f9b27559a5e47c9d1a60fe2eb29dd7b4059fb320392737d90d53647d18545026"} Jan 20 09:11:37 crc kubenswrapper[5115]: I0120 09:11:37.306507 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6cb7c98cbc-lhp2l" event={"ID":"0e0393a6-c76b-4bd6-9358-0314c2eca550","Type":"ContainerStarted","Data":"16f00ae2e909bdbab9e9f0bb68dfa4c4d6e9c21c455eefd3d26a54cf17f6d6dd"} Jan 20 09:11:37 crc kubenswrapper[5115]: I0120 09:11:37.306573 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-6cb7c98cbc-lhp2l" Jan 20 09:11:37 crc kubenswrapper[5115]: I0120 09:11:37.325116 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6b95c9954c-nvlzz" podStartSLOduration=2.325093118 podStartE2EDuration="2.325093118s" podCreationTimestamp="2026-01-20 09:11:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:11:37.322540258 +0000 UTC m=+207.491318798" watchObservedRunningTime="2026-01-20 09:11:37.325093118 +0000 UTC 
m=+207.493871648" Jan 20 09:11:37 crc kubenswrapper[5115]: I0120 09:11:37.337759 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6b95c9954c-nvlzz" Jan 20 09:11:37 crc kubenswrapper[5115]: I0120 09:11:37.348107 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6cb7c98cbc-lhp2l" podStartSLOduration=2.348096755 podStartE2EDuration="2.348096755s" podCreationTimestamp="2026-01-20 09:11:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:11:37.346589544 +0000 UTC m=+207.515368124" watchObservedRunningTime="2026-01-20 09:11:37.348096755 +0000 UTC m=+207.516875285" Jan 20 09:11:37 crc kubenswrapper[5115]: I0120 09:11:37.387227 5115 ???:1] "http: TLS handshake error from 192.168.126.11:50856: no serving certificate available for the kubelet" Jan 20 09:11:38 crc kubenswrapper[5115]: I0120 09:11:38.132502 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6cb7c98cbc-lhp2l" Jan 20 09:11:38 crc kubenswrapper[5115]: I0120 09:11:38.225666 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0445ff5a-7f56-4085-98a2-35f8418fc9b5" path="/var/lib/kubelet/pods/0445ff5a-7f56-4085-98a2-35f8418fc9b5/volumes" Jan 20 09:11:38 crc kubenswrapper[5115]: I0120 09:11:38.226623 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="941ddcdd-0183-45d6-929e-e4138126657d" path="/var/lib/kubelet/pods/941ddcdd-0183-45d6-929e-e4138126657d/volumes" Jan 20 09:11:43 crc kubenswrapper[5115]: I0120 09:11:43.955807 5115 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 20 09:11:43 crc kubenswrapper[5115]: I0120 09:11:43.978873 5115 kubelet.go:2547] "SyncLoop 
REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 20 09:11:43 crc kubenswrapper[5115]: I0120 09:11:43.979120 5115 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 20 09:11:43 crc kubenswrapper[5115]: I0120 09:11:43.979298 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 09:11:43 crc kubenswrapper[5115]: I0120 09:11:43.980322 5115 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" containerID="cri-o://732f833d741db4f25185d597b6c55514eac6e2fefadb22332239b99e78faa12c" gracePeriod=15 Jan 20 09:11:43 crc kubenswrapper[5115]: I0120 09:11:43.980627 5115 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" containerID="cri-o://cd35bfe818999fb69f754d3ef537d63114d8766c9a55fd8c1f055b4598993e53" gracePeriod=15 Jan 20 09:11:43 crc kubenswrapper[5115]: I0120 09:11:43.980740 5115 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://6a65133584c92a02557ec7a68bc231cbf328c72b94121d393761fae9e77a43df" gracePeriod=15 Jan 20 09:11:43 crc kubenswrapper[5115]: I0120 09:11:43.980807 5115 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://4459efcaad2c1e7ab6acad4f70731a19325a72c01d38b2f5c5ebb0e654c3e652" gracePeriod=15 Jan 20 09:11:43 crc 
kubenswrapper[5115]: I0120 09:11:43.980923 5115 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" containerID="cri-o://7bc7ce39ff7ab01bae0a1441c0086dd0bb588059f1c38dcf038a03d08f73e0f5" gracePeriod=15 Jan 20 09:11:43 crc kubenswrapper[5115]: I0120 09:11:43.981065 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Jan 20 09:11:43 crc kubenswrapper[5115]: I0120 09:11:43.981128 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Jan 20 09:11:43 crc kubenswrapper[5115]: I0120 09:11:43.981158 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Jan 20 09:11:43 crc kubenswrapper[5115]: I0120 09:11:43.981174 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Jan 20 09:11:43 crc kubenswrapper[5115]: I0120 09:11:43.981193 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 20 09:11:43 crc kubenswrapper[5115]: I0120 09:11:43.981277 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 20 09:11:43 crc kubenswrapper[5115]: I0120 09:11:43.981306 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Jan 20 09:11:43 crc kubenswrapper[5115]: I0120 09:11:43.981369 5115 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Jan 20 09:11:43 crc kubenswrapper[5115]: I0120 09:11:43.981475 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Jan 20 09:11:43 crc kubenswrapper[5115]: I0120 09:11:43.981499 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Jan 20 09:11:43 crc kubenswrapper[5115]: I0120 09:11:43.981518 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 20 09:11:43 crc kubenswrapper[5115]: I0120 09:11:43.981584 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 20 09:11:43 crc kubenswrapper[5115]: I0120 09:11:43.981658 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 20 09:11:43 crc kubenswrapper[5115]: I0120 09:11:43.981682 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 20 09:11:43 crc kubenswrapper[5115]: I0120 09:11:43.981702 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup" Jan 20 09:11:43 crc kubenswrapper[5115]: I0120 09:11:43.981761 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup" Jan 20 09:11:43 crc kubenswrapper[5115]: I0120 09:11:43.982343 5115 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 20 09:11:43 crc 
kubenswrapper[5115]: I0120 09:11:43.982445 5115 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Jan 20 09:11:43 crc kubenswrapper[5115]: I0120 09:11:43.982478 5115 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Jan 20 09:11:43 crc kubenswrapper[5115]: I0120 09:11:43.982546 5115 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 20 09:11:43 crc kubenswrapper[5115]: I0120 09:11:43.982571 5115 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 20 09:11:43 crc kubenswrapper[5115]: I0120 09:11:43.982591 5115 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Jan 20 09:11:43 crc kubenswrapper[5115]: I0120 09:11:43.982682 5115 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 20 09:11:43 crc kubenswrapper[5115]: I0120 09:11:43.982757 5115 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Jan 20 09:11:43 crc kubenswrapper[5115]: I0120 09:11:43.983259 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 20 09:11:43 crc kubenswrapper[5115]: I0120 09:11:43.983339 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 20 09:11:43 crc kubenswrapper[5115]: I0120 09:11:43.983462 5115 cpu_manager.go:401] 
"RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 20 09:11:43 crc kubenswrapper[5115]: I0120 09:11:43.983540 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 20 09:11:43 crc kubenswrapper[5115]: I0120 09:11:43.984053 5115 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 20 09:11:43 crc kubenswrapper[5115]: I0120 09:11:43.991637 5115 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="3a14caf222afb62aaabdc47808b6f944" podUID="57755cc5f99000cc11e193051474d4e2" Jan 20 09:11:44 crc kubenswrapper[5115]: I0120 09:11:44.044139 5115 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 09:11:44 crc kubenswrapper[5115]: I0120 09:11:44.082996 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:11:44 crc kubenswrapper[5115]: I0120 09:11:44.083205 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 09:11:44 crc kubenswrapper[5115]: I0120 09:11:44.083232 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 09:11:44 crc kubenswrapper[5115]: I0120 09:11:44.083253 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:11:44 crc kubenswrapper[5115]: I0120 09:11:44.083276 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:11:44 crc kubenswrapper[5115]: I0120 09:11:44.083335 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:11:44 crc kubenswrapper[5115]: I0120 09:11:44.083365 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 09:11:44 crc kubenswrapper[5115]: I0120 09:11:44.083476 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 09:11:44 crc kubenswrapper[5115]: I0120 09:11:44.083532 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 09:11:44 crc kubenswrapper[5115]: I0120 09:11:44.083565 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:11:44 crc kubenswrapper[5115]: I0120 09:11:44.184871 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:11:44 crc kubenswrapper[5115]: I0120 09:11:44.184997 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:11:44 crc kubenswrapper[5115]: I0120 09:11:44.185045 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: 
\"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 09:11:44 crc kubenswrapper[5115]: I0120 09:11:44.185081 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 09:11:44 crc kubenswrapper[5115]: I0120 09:11:44.185115 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:11:44 crc kubenswrapper[5115]: I0120 09:11:44.185143 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:11:44 crc kubenswrapper[5115]: I0120 09:11:44.185224 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:11:44 crc kubenswrapper[5115]: I0120 09:11:44.185267 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 09:11:44 crc kubenswrapper[5115]: I0120 09:11:44.185303 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 09:11:44 crc kubenswrapper[5115]: I0120 09:11:44.185362 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 09:11:44 crc kubenswrapper[5115]: I0120 09:11:44.185471 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 09:11:44 crc kubenswrapper[5115]: I0120 09:11:44.185841 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:11:44 crc kubenswrapper[5115]: I0120 09:11:44.185973 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: 
\"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 09:11:44 crc kubenswrapper[5115]: I0120 09:11:44.186012 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 09:11:44 crc kubenswrapper[5115]: I0120 09:11:44.186043 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:11:44 crc kubenswrapper[5115]: I0120 09:11:44.186075 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:11:44 crc kubenswrapper[5115]: I0120 09:11:44.186480 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:11:44 crc kubenswrapper[5115]: I0120 09:11:44.186527 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 09:11:44 crc 
kubenswrapper[5115]: I0120 09:11:44.186549 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:11:44 crc kubenswrapper[5115]: I0120 09:11:44.186561 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 09:11:44 crc kubenswrapper[5115]: E0120 09:11:44.263580 5115 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 20 09:11:44 crc kubenswrapper[5115]: E0120 09:11:44.264089 5115 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 20 09:11:44 crc kubenswrapper[5115]: E0120 09:11:44.264589 5115 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 20 09:11:44 crc kubenswrapper[5115]: E0120 09:11:44.265280 5115 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 20 09:11:44 crc kubenswrapper[5115]: E0120 09:11:44.265551 5115 controller.go:195] 
"Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 20 09:11:44 crc kubenswrapper[5115]: I0120 09:11:44.265585 5115 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 20 09:11:44 crc kubenswrapper[5115]: E0120 09:11:44.265805 5115 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: connection refused" interval="200ms" Jan 20 09:11:44 crc kubenswrapper[5115]: I0120 09:11:44.356970 5115 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Jan 20 09:11:44 crc kubenswrapper[5115]: I0120 09:11:44.359280 5115 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Jan 20 09:11:44 crc kubenswrapper[5115]: I0120 09:11:44.360220 5115 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="cd35bfe818999fb69f754d3ef537d63114d8766c9a55fd8c1f055b4598993e53" exitCode=0 Jan 20 09:11:44 crc kubenswrapper[5115]: I0120 09:11:44.360273 5115 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="6a65133584c92a02557ec7a68bc231cbf328c72b94121d393761fae9e77a43df" exitCode=0 Jan 20 09:11:44 crc kubenswrapper[5115]: I0120 09:11:44.360291 5115 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="4459efcaad2c1e7ab6acad4f70731a19325a72c01d38b2f5c5ebb0e654c3e652" exitCode=0 Jan 20 09:11:44 crc 
kubenswrapper[5115]: I0120 09:11:44.360311 5115 scope.go:117] "RemoveContainer" containerID="b33a6c20a19b1f41a4ad2db77e01a63543fbc61ea2e64cd19c7c6530bae76c3b" Jan 20 09:11:44 crc kubenswrapper[5115]: I0120 09:11:44.360318 5115 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="7bc7ce39ff7ab01bae0a1441c0086dd0bb588059f1c38dcf038a03d08f73e0f5" exitCode=2 Jan 20 09:11:44 crc kubenswrapper[5115]: I0120 09:11:44.362755 5115 generic.go:358] "Generic (PLEG): container finished" podID="128ab750-3574-4f36-a27e-5bddc737a52d" containerID="92b9831f290b04d0013bc0318c36c8ef1081a308ee1f6759b62245920ad2c43e" exitCode=0 Jan 20 09:11:44 crc kubenswrapper[5115]: I0120 09:11:44.362890 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"128ab750-3574-4f36-a27e-5bddc737a52d","Type":"ContainerDied","Data":"92b9831f290b04d0013bc0318c36c8ef1081a308ee1f6759b62245920ad2c43e"} Jan 20 09:11:44 crc kubenswrapper[5115]: I0120 09:11:44.364184 5115 status_manager.go:895] "Failed to get status for pod" podUID="128ab750-3574-4f36-a27e-5bddc737a52d" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 20 09:11:44 crc kubenswrapper[5115]: E0120 09:11:44.466755 5115 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: connection refused" interval="400ms" Jan 20 09:11:44 crc kubenswrapper[5115]: E0120 09:11:44.868309 5115 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: 
connection refused" interval="800ms" Jan 20 09:11:45 crc kubenswrapper[5115]: I0120 09:11:45.378700 5115 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Jan 20 09:11:45 crc kubenswrapper[5115]: E0120 09:11:45.669972 5115 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: connection refused" interval="1.6s" Jan 20 09:11:45 crc kubenswrapper[5115]: I0120 09:11:45.772613 5115 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Jan 20 09:11:45 crc kubenswrapper[5115]: I0120 09:11:45.773942 5115 status_manager.go:895] "Failed to get status for pod" podUID="128ab750-3574-4f36-a27e-5bddc737a52d" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 20 09:11:45 crc kubenswrapper[5115]: I0120 09:11:45.925782 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/128ab750-3574-4f36-a27e-5bddc737a52d-kube-api-access\") pod \"128ab750-3574-4f36-a27e-5bddc737a52d\" (UID: \"128ab750-3574-4f36-a27e-5bddc737a52d\") " Jan 20 09:11:45 crc kubenswrapper[5115]: I0120 09:11:45.925862 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/128ab750-3574-4f36-a27e-5bddc737a52d-kubelet-dir\") pod \"128ab750-3574-4f36-a27e-5bddc737a52d\" (UID: \"128ab750-3574-4f36-a27e-5bddc737a52d\") " Jan 20 09:11:45 crc kubenswrapper[5115]: I0120 09:11:45.925944 5115 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/128ab750-3574-4f36-a27e-5bddc737a52d-var-lock\") pod \"128ab750-3574-4f36-a27e-5bddc737a52d\" (UID: \"128ab750-3574-4f36-a27e-5bddc737a52d\") " Jan 20 09:11:45 crc kubenswrapper[5115]: I0120 09:11:45.925989 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/128ab750-3574-4f36-a27e-5bddc737a52d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "128ab750-3574-4f36-a27e-5bddc737a52d" (UID: "128ab750-3574-4f36-a27e-5bddc737a52d"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 09:11:45 crc kubenswrapper[5115]: I0120 09:11:45.926070 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/128ab750-3574-4f36-a27e-5bddc737a52d-var-lock" (OuterVolumeSpecName: "var-lock") pod "128ab750-3574-4f36-a27e-5bddc737a52d" (UID: "128ab750-3574-4f36-a27e-5bddc737a52d"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 09:11:45 crc kubenswrapper[5115]: I0120 09:11:45.926560 5115 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/128ab750-3574-4f36-a27e-5bddc737a52d-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 20 09:11:45 crc kubenswrapper[5115]: I0120 09:11:45.926576 5115 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/128ab750-3574-4f36-a27e-5bddc737a52d-var-lock\") on node \"crc\" DevicePath \"\"" Jan 20 09:11:45 crc kubenswrapper[5115]: I0120 09:11:45.939524 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/128ab750-3574-4f36-a27e-5bddc737a52d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "128ab750-3574-4f36-a27e-5bddc737a52d" (UID: "128ab750-3574-4f36-a27e-5bddc737a52d"). 
InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:11:46 crc kubenswrapper[5115]: I0120 09:11:46.028684 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/128ab750-3574-4f36-a27e-5bddc737a52d-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 20 09:11:46 crc kubenswrapper[5115]: I0120 09:11:46.390511 5115 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Jan 20 09:11:46 crc kubenswrapper[5115]: I0120 09:11:46.391856 5115 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="732f833d741db4f25185d597b6c55514eac6e2fefadb22332239b99e78faa12c" exitCode=0 Jan 20 09:11:46 crc kubenswrapper[5115]: I0120 09:11:46.392096 5115 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c434758e6e9146827245a5ae9ad4f26779e19f2474d8e2ec2f6da8ef3ada11b" Jan 20 09:11:46 crc kubenswrapper[5115]: I0120 09:11:46.393804 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"128ab750-3574-4f36-a27e-5bddc737a52d","Type":"ContainerDied","Data":"2f73af6d69f6c232d9d9d0a495fca6672d15d9b3c8a84a1c612e0ef514970d06"} Jan 20 09:11:46 crc kubenswrapper[5115]: I0120 09:11:46.393863 5115 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f73af6d69f6c232d9d9d0a495fca6672d15d9b3c8a84a1c612e0ef514970d06" Jan 20 09:11:46 crc kubenswrapper[5115]: I0120 09:11:46.394045 5115 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Jan 20 09:11:46 crc kubenswrapper[5115]: I0120 09:11:46.395820 5115 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Jan 20 09:11:46 crc kubenswrapper[5115]: I0120 09:11:46.396698 5115 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:11:46 crc kubenswrapper[5115]: I0120 09:11:46.397991 5115 status_manager.go:895] "Failed to get status for pod" podUID="128ab750-3574-4f36-a27e-5bddc737a52d" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 20 09:11:46 crc kubenswrapper[5115]: I0120 09:11:46.398241 5115 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 20 09:11:46 crc kubenswrapper[5115]: I0120 09:11:46.398573 5115 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 20 09:11:46 crc kubenswrapper[5115]: I0120 09:11:46.398806 5115 status_manager.go:895] "Failed to get status for pod" podUID="128ab750-3574-4f36-a27e-5bddc737a52d" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial 
tcp 38.102.83.132:6443: connect: connection refused" Jan 20 09:11:46 crc kubenswrapper[5115]: I0120 09:11:46.433522 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Jan 20 09:11:46 crc kubenswrapper[5115]: I0120 09:11:46.433623 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Jan 20 09:11:46 crc kubenswrapper[5115]: I0120 09:11:46.433712 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Jan 20 09:11:46 crc kubenswrapper[5115]: I0120 09:11:46.433721 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 09:11:46 crc kubenswrapper[5115]: I0120 09:11:46.433768 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Jan 20 09:11:46 crc kubenswrapper[5115]: I0120 09:11:46.433782 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 09:11:46 crc kubenswrapper[5115]: I0120 09:11:46.433793 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 09:11:46 crc kubenswrapper[5115]: I0120 09:11:46.433844 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Jan 20 09:11:46 crc kubenswrapper[5115]: I0120 09:11:46.434312 5115 reconciler_common.go:299] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 20 09:11:46 crc kubenswrapper[5115]: I0120 09:11:46.434348 5115 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 20 09:11:46 crc kubenswrapper[5115]: I0120 09:11:46.434365 5115 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 20 09:11:46 crc kubenswrapper[5115]: I0120 09:11:46.434854 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir" (OuterVolumeSpecName: "ca-bundle-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "ca-bundle-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 20 09:11:46 crc kubenswrapper[5115]: I0120 09:11:46.436754 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "tmp-dir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 20 09:11:46 crc kubenswrapper[5115]: I0120 09:11:46.535648 5115 reconciler_common.go:299] "Volume detached for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") on node \"crc\" DevicePath \"\"" Jan 20 09:11:46 crc kubenswrapper[5115]: I0120 09:11:46.535705 5115 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 20 09:11:47 crc kubenswrapper[5115]: E0120 09:11:47.271240 5115 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: connection refused" interval="3.2s" Jan 20 09:11:47 crc kubenswrapper[5115]: I0120 09:11:47.399020 5115 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:11:47 crc kubenswrapper[5115]: I0120 09:11:47.431310 5115 status_manager.go:895] "Failed to get status for pod" podUID="128ab750-3574-4f36-a27e-5bddc737a52d" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 20 09:11:47 crc kubenswrapper[5115]: I0120 09:11:47.431717 5115 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 20 09:11:47 crc kubenswrapper[5115]: I0120 09:11:47.606496 5115 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" podUID="73f78db9-bab5-49ee-84a4-9f0825efca8a" containerName="oauth-openshift" containerID="cri-o://cd61efcab514cc481b8abf90fad1504f795c14ca967ea45686ed74a313ace292" gracePeriod=15 Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.161212 5115 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.162745 5115 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.163702 5115 status_manager.go:895] "Failed to get status for pod" podUID="73f78db9-bab5-49ee-84a4-9f0825efca8a" pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-c88bx\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.164456 5115 status_manager.go:895] "Failed to get status for pod" podUID="128ab750-3574-4f36-a27e-5bddc737a52d" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.230401 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a14caf222afb62aaabdc47808b6f944" path="/var/lib/kubelet/pods/3a14caf222afb62aaabdc47808b6f944/volumes" Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.265812 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-system-trusted-ca-bundle\") pod \"73f78db9-bab5-49ee-84a4-9f0825efca8a\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.265950 5115 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-system-service-ca\") pod \"73f78db9-bab5-49ee-84a4-9f0825efca8a\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.266004 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-user-template-provider-selection\") pod \"73f78db9-bab5-49ee-84a4-9f0825efca8a\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.267150 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "73f78db9-bab5-49ee-84a4-9f0825efca8a" (UID: "73f78db9-bab5-49ee-84a4-9f0825efca8a"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.267257 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "73f78db9-bab5-49ee-84a4-9f0825efca8a" (UID: "73f78db9-bab5-49ee-84a4-9f0825efca8a"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.266043 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-system-serving-cert\") pod \"73f78db9-bab5-49ee-84a4-9f0825efca8a\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.267990 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-system-cliconfig\") pod \"73f78db9-bab5-49ee-84a4-9f0825efca8a\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.268120 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/73f78db9-bab5-49ee-84a4-9f0825efca8a-audit-policies\") pod \"73f78db9-bab5-49ee-84a4-9f0825efca8a\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.268192 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-system-router-certs\") pod \"73f78db9-bab5-49ee-84a4-9f0825efca8a\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.268317 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-system-session\") pod \"73f78db9-bab5-49ee-84a4-9f0825efca8a\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.268440 
5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2pzbj\" (UniqueName: \"kubernetes.io/projected/73f78db9-bab5-49ee-84a4-9f0825efca8a-kube-api-access-2pzbj\") pod \"73f78db9-bab5-49ee-84a4-9f0825efca8a\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.268531 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-system-ocp-branding-template\") pod \"73f78db9-bab5-49ee-84a4-9f0825efca8a\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.268627 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-user-idp-0-file-data\") pod \"73f78db9-bab5-49ee-84a4-9f0825efca8a\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.268716 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/73f78db9-bab5-49ee-84a4-9f0825efca8a-audit-dir\") pod \"73f78db9-bab5-49ee-84a4-9f0825efca8a\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.268768 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-user-template-error\") pod \"73f78db9-bab5-49ee-84a4-9f0825efca8a\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.268831 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/73f78db9-bab5-49ee-84a4-9f0825efca8a-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "73f78db9-bab5-49ee-84a4-9f0825efca8a" (UID: "73f78db9-bab5-49ee-84a4-9f0825efca8a"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.268998 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-user-template-login\") pod \"73f78db9-bab5-49ee-84a4-9f0825efca8a\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.269040 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "73f78db9-bab5-49ee-84a4-9f0825efca8a" (UID: "73f78db9-bab5-49ee-84a4-9f0825efca8a"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.269182 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73f78db9-bab5-49ee-84a4-9f0825efca8a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "73f78db9-bab5-49ee-84a4-9f0825efca8a" (UID: "73f78db9-bab5-49ee-84a4-9f0825efca8a"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.269941 5115 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.269993 5115 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.270019 5115 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.270045 5115 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/73f78db9-bab5-49ee-84a4-9f0825efca8a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.270071 5115 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/73f78db9-bab5-49ee-84a4-9f0825efca8a-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.279186 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73f78db9-bab5-49ee-84a4-9f0825efca8a-kube-api-access-2pzbj" (OuterVolumeSpecName: "kube-api-access-2pzbj") pod "73f78db9-bab5-49ee-84a4-9f0825efca8a" (UID: "73f78db9-bab5-49ee-84a4-9f0825efca8a"). InnerVolumeSpecName "kube-api-access-2pzbj". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.279287 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "73f78db9-bab5-49ee-84a4-9f0825efca8a" (UID: "73f78db9-bab5-49ee-84a4-9f0825efca8a"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.280413 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "73f78db9-bab5-49ee-84a4-9f0825efca8a" (UID: "73f78db9-bab5-49ee-84a4-9f0825efca8a"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.280846 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "73f78db9-bab5-49ee-84a4-9f0825efca8a" (UID: "73f78db9-bab5-49ee-84a4-9f0825efca8a"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.281762 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "73f78db9-bab5-49ee-84a4-9f0825efca8a" (UID: "73f78db9-bab5-49ee-84a4-9f0825efca8a"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.282284 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "73f78db9-bab5-49ee-84a4-9f0825efca8a" (UID: "73f78db9-bab5-49ee-84a4-9f0825efca8a"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.282736 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "73f78db9-bab5-49ee-84a4-9f0825efca8a" (UID: "73f78db9-bab5-49ee-84a4-9f0825efca8a"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.283297 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "73f78db9-bab5-49ee-84a4-9f0825efca8a" (UID: "73f78db9-bab5-49ee-84a4-9f0825efca8a"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.283713 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "73f78db9-bab5-49ee-84a4-9f0825efca8a" (UID: "73f78db9-bab5-49ee-84a4-9f0825efca8a"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.371981 5115 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.372063 5115 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.372114 5115 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.372135 5115 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.372219 5115 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.372240 5115 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.372292 5115 reconciler_common.go:299] "Volume detached for volume 
\"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.372311 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2pzbj\" (UniqueName: \"kubernetes.io/projected/73f78db9-bab5-49ee-84a4-9f0825efca8a-kube-api-access-2pzbj\") on node \"crc\" DevicePath \"\"" Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.372329 5115 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.409585 5115 generic.go:358] "Generic (PLEG): container finished" podID="73f78db9-bab5-49ee-84a4-9f0825efca8a" containerID="cd61efcab514cc481b8abf90fad1504f795c14ca967ea45686ed74a313ace292" exitCode=0 Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.409755 5115 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.409787 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" event={"ID":"73f78db9-bab5-49ee-84a4-9f0825efca8a","Type":"ContainerDied","Data":"cd61efcab514cc481b8abf90fad1504f795c14ca967ea45686ed74a313ace292"} Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.409826 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" event={"ID":"73f78db9-bab5-49ee-84a4-9f0825efca8a","Type":"ContainerDied","Data":"41ea8c623ecacb84e93a0bb70429c6d21f2263332366f0ca16d5017167557e81"} Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.409847 5115 scope.go:117] "RemoveContainer" containerID="cd61efcab514cc481b8abf90fad1504f795c14ca967ea45686ed74a313ace292" Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.410621 5115 status_manager.go:895] "Failed to get status for pod" podUID="128ab750-3574-4f36-a27e-5bddc737a52d" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.410821 5115 status_manager.go:895] "Failed to get status for pod" podUID="73f78db9-bab5-49ee-84a4-9f0825efca8a" pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-c88bx\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.435236 5115 scope.go:117] "RemoveContainer" containerID="cd61efcab514cc481b8abf90fad1504f795c14ca967ea45686ed74a313ace292" Jan 20 09:11:48 crc kubenswrapper[5115]: E0120 09:11:48.435956 5115 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd61efcab514cc481b8abf90fad1504f795c14ca967ea45686ed74a313ace292\": container with ID starting with cd61efcab514cc481b8abf90fad1504f795c14ca967ea45686ed74a313ace292 not found: ID does not exist" containerID="cd61efcab514cc481b8abf90fad1504f795c14ca967ea45686ed74a313ace292" Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.436031 5115 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd61efcab514cc481b8abf90fad1504f795c14ca967ea45686ed74a313ace292"} err="failed to get container status \"cd61efcab514cc481b8abf90fad1504f795c14ca967ea45686ed74a313ace292\": rpc error: code = NotFound desc = could not find container \"cd61efcab514cc481b8abf90fad1504f795c14ca967ea45686ed74a313ace292\": container with ID starting with cd61efcab514cc481b8abf90fad1504f795c14ca967ea45686ed74a313ace292 not found: ID does not exist" Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.442044 5115 status_manager.go:895] "Failed to get status for pod" podUID="73f78db9-bab5-49ee-84a4-9f0825efca8a" pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-c88bx\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.442690 5115 status_manager.go:895] "Failed to get status for pod" podUID="128ab750-3574-4f36-a27e-5bddc737a52d" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 20 09:11:49 crc kubenswrapper[5115]: E0120 09:11:49.047704 5115 kubelet.go:3342] "Failed creating a mirror pod" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial 
tcp 38.102.83.132:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 09:11:49 crc kubenswrapper[5115]: I0120 09:11:49.048396 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 09:11:49 crc kubenswrapper[5115]: E0120 09:11:49.090067 5115 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.132:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188c657586494e4c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:11:49.088292428 +0000 UTC m=+219.257070988,LastTimestamp:2026-01-20 09:11:49.088292428 +0000 UTC m=+219.257070988,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:11:49 crc kubenswrapper[5115]: I0120 09:11:49.423164 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"f880c11b80dde2953894863f4663242621b5298262f11f219e74f37d19d8d8c4"} Jan 20 09:11:50 crc kubenswrapper[5115]: I0120 09:11:50.219948 5115 status_manager.go:895] "Failed to get status for pod" 
podUID="73f78db9-bab5-49ee-84a4-9f0825efca8a" pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-c88bx\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 20 09:11:50 crc kubenswrapper[5115]: I0120 09:11:50.220767 5115 status_manager.go:895] "Failed to get status for pod" podUID="128ab750-3574-4f36-a27e-5bddc737a52d" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 20 09:11:50 crc kubenswrapper[5115]: I0120 09:11:50.436656 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"4d6b5f4076d96a3976239b71de54fa0176dcbdda361c4d53976d86a2e687e247"} Jan 20 09:11:50 crc kubenswrapper[5115]: I0120 09:11:50.436945 5115 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 09:11:50 crc kubenswrapper[5115]: I0120 09:11:50.437400 5115 status_manager.go:895] "Failed to get status for pod" podUID="73f78db9-bab5-49ee-84a4-9f0825efca8a" pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-c88bx\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 20 09:11:50 crc kubenswrapper[5115]: E0120 09:11:50.437439 5115 kubelet.go:3342] "Failed creating a mirror pod" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.132:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 09:11:50 crc kubenswrapper[5115]: 
I0120 09:11:50.437696 5115 status_manager.go:895] "Failed to get status for pod" podUID="128ab750-3574-4f36-a27e-5bddc737a52d" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 20 09:11:50 crc kubenswrapper[5115]: E0120 09:11:50.472208 5115 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: connection refused" interval="6.4s" Jan 20 09:11:51 crc kubenswrapper[5115]: I0120 09:11:51.446535 5115 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 09:11:51 crc kubenswrapper[5115]: E0120 09:11:51.447394 5115 kubelet.go:3342] "Failed creating a mirror pod" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.132:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 09:11:52 crc kubenswrapper[5115]: E0120 09:11:52.599947 5115 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.132:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188c657586494e4c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image 
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:11:49.088292428 +0000 UTC m=+219.257070988,LastTimestamp:2026-01-20 09:11:49.088292428 +0000 UTC m=+219.257070988,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:11:56 crc kubenswrapper[5115]: E0120 09:11:56.873813 5115 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: connection refused" interval="7s" Jan 20 09:11:58 crc kubenswrapper[5115]: I0120 09:11:58.517381 5115 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 20 09:11:58 crc kubenswrapper[5115]: I0120 09:11:58.517474 5115 generic.go:358] "Generic (PLEG): container finished" podID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerID="cee213223198b5e3642cdac2764daeb64bf20128377548aa985feafed2a3d447" exitCode=1 Jan 20 09:11:58 crc kubenswrapper[5115]: I0120 09:11:58.517527 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerDied","Data":"cee213223198b5e3642cdac2764daeb64bf20128377548aa985feafed2a3d447"} Jan 20 09:11:58 crc kubenswrapper[5115]: I0120 09:11:58.518405 5115 scope.go:117] "RemoveContainer" containerID="cee213223198b5e3642cdac2764daeb64bf20128377548aa985feafed2a3d447" Jan 20 09:11:58 crc kubenswrapper[5115]: I0120 09:11:58.518909 5115 status_manager.go:895] "Failed to get status for pod" 
podUID="128ab750-3574-4f36-a27e-5bddc737a52d" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 20 09:11:58 crc kubenswrapper[5115]: I0120 09:11:58.519593 5115 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 20 09:11:58 crc kubenswrapper[5115]: I0120 09:11:58.520045 5115 status_manager.go:895] "Failed to get status for pod" podUID="73f78db9-bab5-49ee-84a4-9f0825efca8a" pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-c88bx\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 20 09:11:59 crc kubenswrapper[5115]: I0120 09:11:59.217518 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:11:59 crc kubenswrapper[5115]: I0120 09:11:59.219475 5115 status_manager.go:895] "Failed to get status for pod" podUID="73f78db9-bab5-49ee-84a4-9f0825efca8a" pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-c88bx\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 20 09:11:59 crc kubenswrapper[5115]: I0120 09:11:59.220197 5115 status_manager.go:895] "Failed to get status for pod" podUID="128ab750-3574-4f36-a27e-5bddc737a52d" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 20 09:11:59 crc kubenswrapper[5115]: I0120 09:11:59.220771 5115 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 20 09:11:59 crc kubenswrapper[5115]: I0120 09:11:59.236071 5115 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5125ab95-d5cf-48ad-a899-3add343eaeba" Jan 20 09:11:59 crc kubenswrapper[5115]: I0120 09:11:59.236120 5115 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5125ab95-d5cf-48ad-a899-3add343eaeba" Jan 20 09:11:59 crc kubenswrapper[5115]: E0120 09:11:59.236797 5115 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.132:6443: connect: 
connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:11:59 crc kubenswrapper[5115]: I0120 09:11:59.237329 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:11:59 crc kubenswrapper[5115]: W0120 09:11:59.260717 5115 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57755cc5f99000cc11e193051474d4e2.slice/crio-b1267ebe36fc4bca812eb426f0968d81d225f1a5a4da6bad5112b70419b7c6c0 WatchSource:0}: Error finding container b1267ebe36fc4bca812eb426f0968d81d225f1a5a4da6bad5112b70419b7c6c0: Status 404 returned error can't find the container with id b1267ebe36fc4bca812eb426f0968d81d225f1a5a4da6bad5112b70419b7c6c0 Jan 20 09:11:59 crc kubenswrapper[5115]: I0120 09:11:59.545510 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"b1267ebe36fc4bca812eb426f0968d81d225f1a5a4da6bad5112b70419b7c6c0"} Jan 20 09:11:59 crc kubenswrapper[5115]: I0120 09:11:59.550645 5115 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 20 09:11:59 crc kubenswrapper[5115]: I0120 09:11:59.551047 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"d52c89544587359b9809e7538e1334a5902e517df87226da8b50b669ba88e727"} Jan 20 09:11:59 crc kubenswrapper[5115]: I0120 09:11:59.552218 5115 status_manager.go:895] "Failed to get status for pod" podUID="73f78db9-bab5-49ee-84a4-9f0825efca8a" pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-c88bx\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 20 09:11:59 crc kubenswrapper[5115]: I0120 09:11:59.552593 5115 status_manager.go:895] "Failed to get status for pod" podUID="128ab750-3574-4f36-a27e-5bddc737a52d" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 20 09:11:59 crc kubenswrapper[5115]: I0120 09:11:59.553012 5115 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 20 09:12:00 crc kubenswrapper[5115]: I0120 09:12:00.236642 5115 status_manager.go:895] "Failed to get status for pod" podUID="73f78db9-bab5-49ee-84a4-9f0825efca8a" pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-c88bx\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 20 09:12:00 crc kubenswrapper[5115]: I0120 09:12:00.237519 5115 status_manager.go:895] "Failed to get status for pod" podUID="57755cc5f99000cc11e193051474d4e2" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 20 09:12:00 crc kubenswrapper[5115]: I0120 09:12:00.238139 5115 status_manager.go:895] "Failed to get status for pod" podUID="128ab750-3574-4f36-a27e-5bddc737a52d" 
pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 20 09:12:00 crc kubenswrapper[5115]: I0120 09:12:00.238526 5115 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 20 09:12:00 crc kubenswrapper[5115]: I0120 09:12:00.312039 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 09:12:00 crc kubenswrapper[5115]: I0120 09:12:00.566517 5115 generic.go:358] "Generic (PLEG): container finished" podID="57755cc5f99000cc11e193051474d4e2" containerID="130c63fa2a2cbc202bdebd1ad19f2a89021c9e25f31c646f25e6d24d2fda1d10" exitCode=0 Jan 20 09:12:00 crc kubenswrapper[5115]: I0120 09:12:00.566627 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerDied","Data":"130c63fa2a2cbc202bdebd1ad19f2a89021c9e25f31c646f25e6d24d2fda1d10"} Jan 20 09:12:00 crc kubenswrapper[5115]: I0120 09:12:00.567345 5115 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5125ab95-d5cf-48ad-a899-3add343eaeba" Jan 20 09:12:00 crc kubenswrapper[5115]: I0120 09:12:00.568033 5115 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5125ab95-d5cf-48ad-a899-3add343eaeba" Jan 20 09:12:00 crc kubenswrapper[5115]: I0120 09:12:00.568106 5115 status_manager.go:895] "Failed to get status for pod" podUID="73f78db9-bab5-49ee-84a4-9f0825efca8a" 
pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-c88bx\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 20 09:12:00 crc kubenswrapper[5115]: E0120 09:12:00.568745 5115 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:12:00 crc kubenswrapper[5115]: I0120 09:12:00.568801 5115 status_manager.go:895] "Failed to get status for pod" podUID="57755cc5f99000cc11e193051474d4e2" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 20 09:12:00 crc kubenswrapper[5115]: I0120 09:12:00.569445 5115 status_manager.go:895] "Failed to get status for pod" podUID="128ab750-3574-4f36-a27e-5bddc737a52d" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 20 09:12:00 crc kubenswrapper[5115]: I0120 09:12:00.570015 5115 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 20 09:12:01 crc kubenswrapper[5115]: I0120 09:12:01.582528 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"6c43dcdb283f3aa5109a0fc20e7f80d16f4889cfdfa6b195593fcb5764f51caf"} Jan 20 09:12:01 crc kubenswrapper[5115]: I0120 09:12:01.583076 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"63077e0dbd463f50e32ddcd38c795763f097acc01ad341160025ace225579c96"} Jan 20 09:12:01 crc kubenswrapper[5115]: I0120 09:12:01.583096 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"d252061aed5999272626544b964803b9e3f1e7313dfb41b17be61902d46b66ef"} Jan 20 09:12:02 crc kubenswrapper[5115]: I0120 09:12:02.597463 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"00deeca7107d97e93b957a8f41ee4451022c262f5c4bed7b87afa4cf4f77ebcf"} Jan 20 09:12:02 crc kubenswrapper[5115]: I0120 09:12:02.598001 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:12:02 crc kubenswrapper[5115]: I0120 09:12:02.598025 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"c658a796eeda2ba4ece7dce49af08bbbb29572226fb175cea183c3f2b4286a0e"} Jan 20 09:12:02 crc kubenswrapper[5115]: I0120 09:12:02.598142 5115 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5125ab95-d5cf-48ad-a899-3add343eaeba" Jan 20 09:12:02 crc kubenswrapper[5115]: I0120 09:12:02.598178 5115 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="5125ab95-d5cf-48ad-a899-3add343eaeba" Jan 20 09:12:04 crc kubenswrapper[5115]: I0120 09:12:04.237743 5115 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:12:04 crc kubenswrapper[5115]: I0120 09:12:04.238124 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:12:04 crc kubenswrapper[5115]: I0120 09:12:04.245495 5115 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:12:07 crc kubenswrapper[5115]: I0120 09:12:07.819144 5115 kubelet.go:3329] "Deleted mirror pod as it didn't match the static Pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:12:07 crc kubenswrapper[5115]: I0120 09:12:07.819495 5115 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:12:07 crc kubenswrapper[5115]: I0120 09:12:07.896755 5115 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="1ac6b65b-44a1-4768-aa23-062028f72cae" Jan 20 09:12:08 crc kubenswrapper[5115]: I0120 09:12:08.482766 5115 patch_prober.go:28] interesting pod/machine-config-daemon-zvfcd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 09:12:08 crc kubenswrapper[5115]: I0120 09:12:08.482845 5115 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zvfcd" podUID="dc89765b-3b00-4f86-ae67-a5088c182918" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" Jan 20 09:12:08 crc kubenswrapper[5115]: I0120 09:12:08.638488 5115 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5125ab95-d5cf-48ad-a899-3add343eaeba" Jan 20 09:12:08 crc kubenswrapper[5115]: I0120 09:12:08.638538 5115 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5125ab95-d5cf-48ad-a899-3add343eaeba" Jan 20 09:12:08 crc kubenswrapper[5115]: I0120 09:12:08.643775 5115 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="1ac6b65b-44a1-4768-aa23-062028f72cae" Jan 20 09:12:09 crc kubenswrapper[5115]: I0120 09:12:09.300495 5115 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 09:12:09 crc kubenswrapper[5115]: I0120 09:12:09.307813 5115 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 09:12:09 crc kubenswrapper[5115]: I0120 09:12:09.656538 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 09:12:17 crc kubenswrapper[5115]: I0120 09:12:17.870047 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\"" Jan 20 09:12:18 crc kubenswrapper[5115]: I0120 09:12:18.112359 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\"" Jan 20 09:12:18 crc kubenswrapper[5115]: I0120 09:12:18.271731 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\"" Jan 20 09:12:18 crc 
kubenswrapper[5115]: I0120 09:12:18.488206 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\"" Jan 20 09:12:18 crc kubenswrapper[5115]: I0120 09:12:18.914860 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Jan 20 09:12:19 crc kubenswrapper[5115]: I0120 09:12:19.063408 5115 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160" Jan 20 09:12:19 crc kubenswrapper[5115]: I0120 09:12:19.094837 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Jan 20 09:12:19 crc kubenswrapper[5115]: I0120 09:12:19.280293 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Jan 20 09:12:19 crc kubenswrapper[5115]: I0120 09:12:19.314485 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\"" Jan 20 09:12:19 crc kubenswrapper[5115]: I0120 09:12:19.431924 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\"" Jan 20 09:12:19 crc kubenswrapper[5115]: I0120 09:12:19.598377 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\"" Jan 20 09:12:19 crc kubenswrapper[5115]: I0120 09:12:19.690812 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\"" Jan 20 09:12:20 crc kubenswrapper[5115]: I0120 09:12:20.129248 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\"" Jan 20 09:12:20 crc kubenswrapper[5115]: I0120 09:12:20.135939 5115 
reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\"" Jan 20 09:12:20 crc kubenswrapper[5115]: I0120 09:12:20.246243 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\"" Jan 20 09:12:20 crc kubenswrapper[5115]: I0120 09:12:20.313358 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\"" Jan 20 09:12:20 crc kubenswrapper[5115]: I0120 09:12:20.576106 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\"" Jan 20 09:12:20 crc kubenswrapper[5115]: I0120 09:12:20.686278 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\"" Jan 20 09:12:20 crc kubenswrapper[5115]: I0120 09:12:20.699570 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\"" Jan 20 09:12:20 crc kubenswrapper[5115]: I0120 09:12:20.713534 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\"" Jan 20 09:12:20 crc kubenswrapper[5115]: I0120 09:12:20.811961 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\"" Jan 20 09:12:20 crc kubenswrapper[5115]: I0120 09:12:20.895853 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\"" Jan 20 09:12:20 crc kubenswrapper[5115]: I0120 09:12:20.916557 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\"" Jan 20 09:12:20 crc kubenswrapper[5115]: I0120 09:12:20.981278 5115 reflector.go:430] "Caches populated" 
type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\""
Jan 20 09:12:20 crc kubenswrapper[5115]: I0120 09:12:20.983600 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\""
Jan 20 09:12:21 crc kubenswrapper[5115]: I0120 09:12:21.079319 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\""
Jan 20 09:12:21 crc kubenswrapper[5115]: I0120 09:12:21.115537 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\""
Jan 20 09:12:21 crc kubenswrapper[5115]: I0120 09:12:21.180622 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\""
Jan 20 09:12:21 crc kubenswrapper[5115]: I0120 09:12:21.275996 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\""
Jan 20 09:12:21 crc kubenswrapper[5115]: I0120 09:12:21.370791 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\""
Jan 20 09:12:21 crc kubenswrapper[5115]: I0120 09:12:21.446478 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\""
Jan 20 09:12:21 crc kubenswrapper[5115]: I0120 09:12:21.550398 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\""
Jan 20 09:12:21 crc kubenswrapper[5115]: I0120 09:12:21.551332 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\""
Jan 20 09:12:21 crc kubenswrapper[5115]: I0120 09:12:21.552141 5115 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160"
Jan 20 09:12:21 crc kubenswrapper[5115]: I0120 09:12:21.711660 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\""
Jan 20 09:12:21 crc kubenswrapper[5115]: I0120 09:12:21.726809 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\""
Jan 20 09:12:21 crc kubenswrapper[5115]: I0120 09:12:21.727133 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\""
Jan 20 09:12:21 crc kubenswrapper[5115]: I0120 09:12:21.787341 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\""
Jan 20 09:12:21 crc kubenswrapper[5115]: I0120 09:12:21.899997 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\""
Jan 20 09:12:21 crc kubenswrapper[5115]: I0120 09:12:21.923357 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\""
Jan 20 09:12:21 crc kubenswrapper[5115]: I0120 09:12:21.934760 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\""
Jan 20 09:12:21 crc kubenswrapper[5115]: I0120 09:12:21.998863 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\""
Jan 20 09:12:22 crc kubenswrapper[5115]: I0120 09:12:22.111815 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\""
Jan 20 09:12:22 crc kubenswrapper[5115]: I0120 09:12:22.115136 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\""
Jan 20 09:12:22 crc kubenswrapper[5115]: I0120 09:12:22.158817 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\""
Jan 20 09:12:22 crc kubenswrapper[5115]: I0120 09:12:22.215001 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\""
Jan 20 09:12:22 crc kubenswrapper[5115]: I0120 09:12:22.257475 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\""
Jan 20 09:12:22 crc kubenswrapper[5115]: I0120 09:12:22.282250 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\""
Jan 20 09:12:22 crc kubenswrapper[5115]: I0120 09:12:22.316255 5115 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66"
Jan 20 09:12:22 crc kubenswrapper[5115]: I0120 09:12:22.325385 5115 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-c88bx","openshift-kube-apiserver/kube-apiserver-crc"]
Jan 20 09:12:22 crc kubenswrapper[5115]: I0120 09:12:22.325487 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 20 09:12:22 crc kubenswrapper[5115]: I0120 09:12:22.337111 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 20 09:12:22 crc kubenswrapper[5115]: I0120 09:12:22.337953 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 20 09:12:22 crc kubenswrapper[5115]: I0120 09:12:22.366104 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=15.366080096 podStartE2EDuration="15.366080096s" podCreationTimestamp="2026-01-20 09:12:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:12:22.362305453 +0000 UTC m=+252.531084023" watchObservedRunningTime="2026-01-20 09:12:22.366080096 +0000 UTC m=+252.534858636"
Jan 20 09:12:22 crc kubenswrapper[5115]: I0120 09:12:22.419833 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\""
Jan 20 09:12:22 crc kubenswrapper[5115]: I0120 09:12:22.642405 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\""
Jan 20 09:12:22 crc kubenswrapper[5115]: I0120 09:12:22.660881 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\""
Jan 20 09:12:22 crc kubenswrapper[5115]: I0120 09:12:22.693013 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\""
Jan 20 09:12:22 crc kubenswrapper[5115]: I0120 09:12:22.696800 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\""
Jan 20 09:12:22 crc kubenswrapper[5115]: I0120 09:12:22.713017 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\""
Jan 20 09:12:22 crc kubenswrapper[5115]: I0120 09:12:22.730827 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\""
Jan 20 09:12:22 crc kubenswrapper[5115]: I0120 09:12:22.791585 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\""
Jan 20 09:12:22 crc kubenswrapper[5115]: I0120 09:12:22.882367 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\""
Jan 20 09:12:22 crc kubenswrapper[5115]: I0120 09:12:22.925751 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\""
Jan 20 09:12:22 crc kubenswrapper[5115]: I0120 09:12:22.926219 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\""
Jan 20 09:12:23 crc kubenswrapper[5115]: I0120 09:12:23.099877 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\""
Jan 20 09:12:23 crc kubenswrapper[5115]: I0120 09:12:23.177930 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\""
Jan 20 09:12:23 crc kubenswrapper[5115]: I0120 09:12:23.209542 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\""
Jan 20 09:12:23 crc kubenswrapper[5115]: I0120 09:12:23.250127 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\""
Jan 20 09:12:23 crc kubenswrapper[5115]: I0120 09:12:23.292114 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\""
Jan 20 09:12:23 crc kubenswrapper[5115]: I0120 09:12:23.459244 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\""
Jan 20 09:12:23 crc kubenswrapper[5115]: I0120 09:12:23.481193 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\""
Jan 20 09:12:23 crc kubenswrapper[5115]: I0120 09:12:23.523473 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\""
Jan 20 09:12:23 crc kubenswrapper[5115]: I0120 09:12:23.557037 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\""
Jan 20 09:12:23 crc kubenswrapper[5115]: I0120 09:12:23.685375 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\""
Jan 20 09:12:23 crc kubenswrapper[5115]: I0120 09:12:23.688243 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\""
Jan 20 09:12:23 crc kubenswrapper[5115]: I0120 09:12:23.795945 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\""
Jan 20 09:12:23 crc kubenswrapper[5115]: I0120 09:12:23.876061 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\""
Jan 20 09:12:23 crc kubenswrapper[5115]: I0120 09:12:23.886935 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\""
Jan 20 09:12:23 crc kubenswrapper[5115]: I0120 09:12:23.887661 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\""
Jan 20 09:12:23 crc kubenswrapper[5115]: I0120 09:12:23.892992 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\""
Jan 20 09:12:23 crc kubenswrapper[5115]: I0120 09:12:23.904739 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\""
Jan 20 09:12:23 crc kubenswrapper[5115]: I0120 09:12:23.927432 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\""
Jan 20 09:12:23 crc kubenswrapper[5115]: I0120 09:12:23.974129 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\""
Jan 20 09:12:23 crc kubenswrapper[5115]: I0120 09:12:23.999732 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\""
Jan 20 09:12:24 crc kubenswrapper[5115]: I0120 09:12:24.007397 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\""
Jan 20 09:12:24 crc kubenswrapper[5115]: I0120 09:12:24.021427 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\""
Jan 20 09:12:24 crc kubenswrapper[5115]: I0120 09:12:24.023142 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\""
Jan 20 09:12:24 crc kubenswrapper[5115]: I0120 09:12:24.107092 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\""
Jan 20 09:12:24 crc kubenswrapper[5115]: I0120 09:12:24.119395 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\""
Jan 20 09:12:24 crc kubenswrapper[5115]: I0120 09:12:24.129581 5115 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160"
Jan 20 09:12:24 crc kubenswrapper[5115]: I0120 09:12:24.184181 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\""
Jan 20 09:12:24 crc kubenswrapper[5115]: I0120 09:12:24.192773 5115 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162"
Jan 20 09:12:24 crc kubenswrapper[5115]: I0120 09:12:24.239802 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73f78db9-bab5-49ee-84a4-9f0825efca8a" path="/var/lib/kubelet/pods/73f78db9-bab5-49ee-84a4-9f0825efca8a/volumes"
Jan 20 09:12:24 crc kubenswrapper[5115]: I0120 09:12:24.322596 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\""
Jan 20 09:12:24 crc kubenswrapper[5115]: I0120 09:12:24.334109 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\""
Jan 20 09:12:24 crc kubenswrapper[5115]: I0120 09:12:24.360239 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\""
Jan 20 09:12:24 crc kubenswrapper[5115]: I0120 09:12:24.408458 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\""
Jan 20 09:12:24 crc kubenswrapper[5115]: I0120 09:12:24.450584 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\""
Jan 20 09:12:24 crc kubenswrapper[5115]: I0120 09:12:24.512186 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\""
Jan 20 09:12:24 crc kubenswrapper[5115]: I0120 09:12:24.545238 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\""
Jan 20 09:12:24 crc kubenswrapper[5115]: I0120 09:12:24.618727 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\""
Jan 20 09:12:24 crc kubenswrapper[5115]: I0120 09:12:24.633257 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\""
Jan 20 09:12:24 crc kubenswrapper[5115]: I0120 09:12:24.702634 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\""
Jan 20 09:12:24 crc kubenswrapper[5115]: I0120 09:12:24.722697 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\""
Jan 20 09:12:24 crc kubenswrapper[5115]: I0120 09:12:24.731118 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\""
Jan 20 09:12:24 crc kubenswrapper[5115]: I0120 09:12:24.754378 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\""
Jan 20 09:12:24 crc kubenswrapper[5115]: I0120 09:12:24.791454 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\""
Jan 20 09:12:24 crc kubenswrapper[5115]: I0120 09:12:24.806015 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\""
Jan 20 09:12:24 crc kubenswrapper[5115]: I0120 09:12:24.848282 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\""
Jan 20 09:12:24 crc kubenswrapper[5115]: I0120 09:12:24.881872 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\""
Jan 20 09:12:24 crc kubenswrapper[5115]: I0120 09:12:24.979953 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\""
Jan 20 09:12:25 crc kubenswrapper[5115]: I0120 09:12:25.116201 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\""
Jan 20 09:12:25 crc kubenswrapper[5115]: I0120 09:12:25.116252 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\""
Jan 20 09:12:25 crc kubenswrapper[5115]: I0120 09:12:25.138190 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\""
Jan 20 09:12:25 crc kubenswrapper[5115]: I0120 09:12:25.299315 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\""
Jan 20 09:12:25 crc kubenswrapper[5115]: I0120 09:12:25.412976 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\""
Jan 20 09:12:25 crc kubenswrapper[5115]: I0120 09:12:25.873071 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\""
Jan 20 09:12:25 crc kubenswrapper[5115]: I0120 09:12:25.875541 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\""
Jan 20 09:12:25 crc kubenswrapper[5115]: I0120 09:12:25.875765 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\""
Jan 20 09:12:25 crc kubenswrapper[5115]: I0120 09:12:25.951870 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\""
Jan 20 09:12:25 crc kubenswrapper[5115]: I0120 09:12:25.965157 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\""
Jan 20 09:12:26 crc kubenswrapper[5115]: I0120 09:12:26.040958 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\""
Jan 20 09:12:26 crc kubenswrapper[5115]: I0120 09:12:26.143367 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\""
Jan 20 09:12:26 crc kubenswrapper[5115]: I0120 09:12:26.199505 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\""
Jan 20 09:12:26 crc kubenswrapper[5115]: I0120 09:12:26.218954 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\""
Jan 20 09:12:26 crc kubenswrapper[5115]: I0120 09:12:26.278378 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\""
Jan 20 09:12:26 crc kubenswrapper[5115]: I0120 09:12:26.315100 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\""
Jan 20 09:12:26 crc kubenswrapper[5115]: I0120 09:12:26.339516 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\""
Jan 20 09:12:26 crc kubenswrapper[5115]: I0120 09:12:26.343164 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\""
Jan 20 09:12:26 crc kubenswrapper[5115]: I0120 09:12:26.346812 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\""
Jan 20 09:12:26 crc kubenswrapper[5115]: I0120 09:12:26.460736 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\""
Jan 20 09:12:26 crc kubenswrapper[5115]: I0120 09:12:26.557254 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\""
Jan 20 09:12:26 crc kubenswrapper[5115]: I0120 09:12:26.592038 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\""
Jan 20 09:12:26 crc kubenswrapper[5115]: I0120 09:12:26.592108 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\""
Jan 20 09:12:26 crc kubenswrapper[5115]: I0120 09:12:26.678948 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\""
Jan 20 09:12:26 crc kubenswrapper[5115]: I0120 09:12:26.719001 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\""
Jan 20 09:12:26 crc kubenswrapper[5115]: I0120 09:12:26.731378 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\""
Jan 20 09:12:26 crc kubenswrapper[5115]: I0120 09:12:26.880311 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\""
Jan 20 09:12:26 crc kubenswrapper[5115]: I0120 09:12:26.985551 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\""
Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.024547 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\""
Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.030092 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\""
Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.058922 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\""
Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.061702 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\""
Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.119559 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\""
Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.214466 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\""
Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.228767 5115 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160"
Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.306791 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\""
Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.358598 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\""
Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.378479 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\""
Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.484526 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\""
Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.611931 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\""
Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.684362 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\""
Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.749255 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\""
Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.761787 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-d5c987897-r9s5c"]
Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.762807 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="73f78db9-bab5-49ee-84a4-9f0825efca8a" containerName="oauth-openshift"
Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.762842 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="73f78db9-bab5-49ee-84a4-9f0825efca8a" containerName="oauth-openshift"
Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.762888 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="128ab750-3574-4f36-a27e-5bddc737a52d" containerName="installer"
Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.762928 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="128ab750-3574-4f36-a27e-5bddc737a52d" containerName="installer"
Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.763103 5115 memory_manager.go:356] "RemoveStaleState removing state" podUID="73f78db9-bab5-49ee-84a4-9f0825efca8a" containerName="oauth-openshift"
Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.763134 5115 memory_manager.go:356] "RemoveStaleState removing state" podUID="128ab750-3574-4f36-a27e-5bddc737a52d" containerName="installer"
Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.792641 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-d5c987897-r9s5c"
Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.797651 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\""
Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.797670 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\""
Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.797750 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\""
Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.798263 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\""
Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.798276 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\""
Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.798679 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\""
Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.799848 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\""
Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.800013 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\""
Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.800034 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\""
Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.806158 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\""
Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.806434 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\""
Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.806167 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\""
Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.806883 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\""
Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.811979 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\""
Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.812868 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\""
Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.816343 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\""
Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.820872 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\""
Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.856205 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/8645f26f-7d64-4135-94fe-7b89b8f4484a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-d5c987897-r9s5c\" (UID: \"8645f26f-7d64-4135-94fe-7b89b8f4484a\") " pod="openshift-authentication/oauth-openshift-d5c987897-r9s5c"
Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.856279 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/8645f26f-7d64-4135-94fe-7b89b8f4484a-v4-0-config-system-session\") pod \"oauth-openshift-d5c987897-r9s5c\" (UID: \"8645f26f-7d64-4135-94fe-7b89b8f4484a\") " pod="openshift-authentication/oauth-openshift-d5c987897-r9s5c"
Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.856311 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8645f26f-7d64-4135-94fe-7b89b8f4484a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-d5c987897-r9s5c\" (UID: \"8645f26f-7d64-4135-94fe-7b89b8f4484a\") " pod="openshift-authentication/oauth-openshift-d5c987897-r9s5c"
Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.856372 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/8645f26f-7d64-4135-94fe-7b89b8f4484a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-d5c987897-r9s5c\" (UID: \"8645f26f-7d64-4135-94fe-7b89b8f4484a\") " pod="openshift-authentication/oauth-openshift-d5c987897-r9s5c"
Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.856454 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/8645f26f-7d64-4135-94fe-7b89b8f4484a-v4-0-config-user-template-error\") pod \"oauth-openshift-d5c987897-r9s5c\" (UID: \"8645f26f-7d64-4135-94fe-7b89b8f4484a\") " pod="openshift-authentication/oauth-openshift-d5c987897-r9s5c"
Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.856525 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/8645f26f-7d64-4135-94fe-7b89b8f4484a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-d5c987897-r9s5c\" (UID: \"8645f26f-7d64-4135-94fe-7b89b8f4484a\") " pod="openshift-authentication/oauth-openshift-d5c987897-r9s5c"
Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.856568 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/8645f26f-7d64-4135-94fe-7b89b8f4484a-v4-0-config-system-service-ca\") pod \"oauth-openshift-d5c987897-r9s5c\" (UID: \"8645f26f-7d64-4135-94fe-7b89b8f4484a\") " pod="openshift-authentication/oauth-openshift-d5c987897-r9s5c"
Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.856600 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/8645f26f-7d64-4135-94fe-7b89b8f4484a-v4-0-config-user-template-login\") pod \"oauth-openshift-d5c987897-r9s5c\" (UID: \"8645f26f-7d64-4135-94fe-7b89b8f4484a\") " pod="openshift-authentication/oauth-openshift-d5c987897-r9s5c"
Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.856642 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/8645f26f-7d64-4135-94fe-7b89b8f4484a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-d5c987897-r9s5c\" (UID: \"8645f26f-7d64-4135-94fe-7b89b8f4484a\") " pod="openshift-authentication/oauth-openshift-d5c987897-r9s5c"
Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.856682 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q55xd\" (UniqueName: \"kubernetes.io/projected/8645f26f-7d64-4135-94fe-7b89b8f4484a-kube-api-access-q55xd\") pod \"oauth-openshift-d5c987897-r9s5c\" (UID: \"8645f26f-7d64-4135-94fe-7b89b8f4484a\") " pod="openshift-authentication/oauth-openshift-d5c987897-r9s5c"
Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.856821 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/8645f26f-7d64-4135-94fe-7b89b8f4484a-v4-0-config-system-router-certs\") pod \"oauth-openshift-d5c987897-r9s5c\" (UID: \"8645f26f-7d64-4135-94fe-7b89b8f4484a\") " pod="openshift-authentication/oauth-openshift-d5c987897-r9s5c"
Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.856915 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8645f26f-7d64-4135-94fe-7b89b8f4484a-audit-policies\") pod \"oauth-openshift-d5c987897-r9s5c\" (UID: \"8645f26f-7d64-4135-94fe-7b89b8f4484a\") " pod="openshift-authentication/oauth-openshift-d5c987897-r9s5c"
Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.856960 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/8645f26f-7d64-4135-94fe-7b89b8f4484a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-d5c987897-r9s5c\" (UID: \"8645f26f-7d64-4135-94fe-7b89b8f4484a\") " pod="openshift-authentication/oauth-openshift-d5c987897-r9s5c"
Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.857003 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8645f26f-7d64-4135-94fe-7b89b8f4484a-audit-dir\") pod \"oauth-openshift-d5c987897-r9s5c\" (UID: \"8645f26f-7d64-4135-94fe-7b89b8f4484a\") " pod="openshift-authentication/oauth-openshift-d5c987897-r9s5c"
Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.891109 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\""
Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.891548 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\""
Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.959231 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/8645f26f-7d64-4135-94fe-7b89b8f4484a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-d5c987897-r9s5c\" (UID: \"8645f26f-7d64-4135-94fe-7b89b8f4484a\") " pod="openshift-authentication/oauth-openshift-d5c987897-r9s5c"
Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.959308 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-q55xd\" (UniqueName: \"kubernetes.io/projected/8645f26f-7d64-4135-94fe-7b89b8f4484a-kube-api-access-q55xd\") pod \"oauth-openshift-d5c987897-r9s5c\" (UID: \"8645f26f-7d64-4135-94fe-7b89b8f4484a\") " pod="openshift-authentication/oauth-openshift-d5c987897-r9s5c"
Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.959384 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/8645f26f-7d64-4135-94fe-7b89b8f4484a-v4-0-config-system-router-certs\") pod \"oauth-openshift-d5c987897-r9s5c\" (UID: \"8645f26f-7d64-4135-94fe-7b89b8f4484a\") " pod="openshift-authentication/oauth-openshift-d5c987897-r9s5c"
Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.959465 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8645f26f-7d64-4135-94fe-7b89b8f4484a-audit-policies\") pod \"oauth-openshift-d5c987897-r9s5c\" (UID: \"8645f26f-7d64-4135-94fe-7b89b8f4484a\") " pod="openshift-authentication/oauth-openshift-d5c987897-r9s5c"
Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.959505 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/8645f26f-7d64-4135-94fe-7b89b8f4484a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-d5c987897-r9s5c\" (UID: \"8645f26f-7d64-4135-94fe-7b89b8f4484a\") " pod="openshift-authentication/oauth-openshift-d5c987897-r9s5c"
Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.959555 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8645f26f-7d64-4135-94fe-7b89b8f4484a-audit-dir\") pod \"oauth-openshift-d5c987897-r9s5c\" (UID: \"8645f26f-7d64-4135-94fe-7b89b8f4484a\") " pod="openshift-authentication/oauth-openshift-d5c987897-r9s5c"
Jan 20 09:12:27 crc
kubenswrapper[5115]: I0120 09:12:27.959639 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/8645f26f-7d64-4135-94fe-7b89b8f4484a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-d5c987897-r9s5c\" (UID: \"8645f26f-7d64-4135-94fe-7b89b8f4484a\") " pod="openshift-authentication/oauth-openshift-d5c987897-r9s5c" Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.959752 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8645f26f-7d64-4135-94fe-7b89b8f4484a-audit-dir\") pod \"oauth-openshift-d5c987897-r9s5c\" (UID: \"8645f26f-7d64-4135-94fe-7b89b8f4484a\") " pod="openshift-authentication/oauth-openshift-d5c987897-r9s5c" Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.959814 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/8645f26f-7d64-4135-94fe-7b89b8f4484a-v4-0-config-system-session\") pod \"oauth-openshift-d5c987897-r9s5c\" (UID: \"8645f26f-7d64-4135-94fe-7b89b8f4484a\") " pod="openshift-authentication/oauth-openshift-d5c987897-r9s5c" Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.959858 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8645f26f-7d64-4135-94fe-7b89b8f4484a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-d5c987897-r9s5c\" (UID: \"8645f26f-7d64-4135-94fe-7b89b8f4484a\") " pod="openshift-authentication/oauth-openshift-d5c987897-r9s5c" Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.959968 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/8645f26f-7d64-4135-94fe-7b89b8f4484a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-d5c987897-r9s5c\" (UID: \"8645f26f-7d64-4135-94fe-7b89b8f4484a\") " pod="openshift-authentication/oauth-openshift-d5c987897-r9s5c" Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.960014 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/8645f26f-7d64-4135-94fe-7b89b8f4484a-v4-0-config-user-template-error\") pod \"oauth-openshift-d5c987897-r9s5c\" (UID: \"8645f26f-7d64-4135-94fe-7b89b8f4484a\") " pod="openshift-authentication/oauth-openshift-d5c987897-r9s5c" Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.960077 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/8645f26f-7d64-4135-94fe-7b89b8f4484a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-d5c987897-r9s5c\" (UID: \"8645f26f-7d64-4135-94fe-7b89b8f4484a\") " pod="openshift-authentication/oauth-openshift-d5c987897-r9s5c" Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.960125 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/8645f26f-7d64-4135-94fe-7b89b8f4484a-v4-0-config-system-service-ca\") pod \"oauth-openshift-d5c987897-r9s5c\" (UID: \"8645f26f-7d64-4135-94fe-7b89b8f4484a\") " pod="openshift-authentication/oauth-openshift-d5c987897-r9s5c" Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.960163 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/8645f26f-7d64-4135-94fe-7b89b8f4484a-v4-0-config-user-template-login\") pod \"oauth-openshift-d5c987897-r9s5c\" (UID: \"8645f26f-7d64-4135-94fe-7b89b8f4484a\") " 
pod="openshift-authentication/oauth-openshift-d5c987897-r9s5c" Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.962107 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8645f26f-7d64-4135-94fe-7b89b8f4484a-audit-policies\") pod \"oauth-openshift-d5c987897-r9s5c\" (UID: \"8645f26f-7d64-4135-94fe-7b89b8f4484a\") " pod="openshift-authentication/oauth-openshift-d5c987897-r9s5c" Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.963392 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/8645f26f-7d64-4135-94fe-7b89b8f4484a-v4-0-config-system-service-ca\") pod \"oauth-openshift-d5c987897-r9s5c\" (UID: \"8645f26f-7d64-4135-94fe-7b89b8f4484a\") " pod="openshift-authentication/oauth-openshift-d5c987897-r9s5c" Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.963419 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/8645f26f-7d64-4135-94fe-7b89b8f4484a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-d5c987897-r9s5c\" (UID: \"8645f26f-7d64-4135-94fe-7b89b8f4484a\") " pod="openshift-authentication/oauth-openshift-d5c987897-r9s5c" Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.964972 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8645f26f-7d64-4135-94fe-7b89b8f4484a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-d5c987897-r9s5c\" (UID: \"8645f26f-7d64-4135-94fe-7b89b8f4484a\") " pod="openshift-authentication/oauth-openshift-d5c987897-r9s5c" Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.967829 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/8645f26f-7d64-4135-94fe-7b89b8f4484a-v4-0-config-system-session\") pod \"oauth-openshift-d5c987897-r9s5c\" (UID: \"8645f26f-7d64-4135-94fe-7b89b8f4484a\") " pod="openshift-authentication/oauth-openshift-d5c987897-r9s5c" Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.967867 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/8645f26f-7d64-4135-94fe-7b89b8f4484a-v4-0-config-system-router-certs\") pod \"oauth-openshift-d5c987897-r9s5c\" (UID: \"8645f26f-7d64-4135-94fe-7b89b8f4484a\") " pod="openshift-authentication/oauth-openshift-d5c987897-r9s5c" Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.967881 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/8645f26f-7d64-4135-94fe-7b89b8f4484a-v4-0-config-user-template-login\") pod \"oauth-openshift-d5c987897-r9s5c\" (UID: \"8645f26f-7d64-4135-94fe-7b89b8f4484a\") " pod="openshift-authentication/oauth-openshift-d5c987897-r9s5c" Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.970805 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/8645f26f-7d64-4135-94fe-7b89b8f4484a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-d5c987897-r9s5c\" (UID: \"8645f26f-7d64-4135-94fe-7b89b8f4484a\") " pod="openshift-authentication/oauth-openshift-d5c987897-r9s5c" Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.971288 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/8645f26f-7d64-4135-94fe-7b89b8f4484a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-d5c987897-r9s5c\" (UID: \"8645f26f-7d64-4135-94fe-7b89b8f4484a\") " pod="openshift-authentication/oauth-openshift-d5c987897-r9s5c" Jan 
20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.971369 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/8645f26f-7d64-4135-94fe-7b89b8f4484a-v4-0-config-user-template-error\") pod \"oauth-openshift-d5c987897-r9s5c\" (UID: \"8645f26f-7d64-4135-94fe-7b89b8f4484a\") " pod="openshift-authentication/oauth-openshift-d5c987897-r9s5c" Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.973046 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/8645f26f-7d64-4135-94fe-7b89b8f4484a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-d5c987897-r9s5c\" (UID: \"8645f26f-7d64-4135-94fe-7b89b8f4484a\") " pod="openshift-authentication/oauth-openshift-d5c987897-r9s5c" Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.973582 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/8645f26f-7d64-4135-94fe-7b89b8f4484a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-d5c987897-r9s5c\" (UID: \"8645f26f-7d64-4135-94fe-7b89b8f4484a\") " pod="openshift-authentication/oauth-openshift-d5c987897-r9s5c" Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.984147 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-q55xd\" (UniqueName: \"kubernetes.io/projected/8645f26f-7d64-4135-94fe-7b89b8f4484a-kube-api-access-q55xd\") pod \"oauth-openshift-d5c987897-r9s5c\" (UID: \"8645f26f-7d64-4135-94fe-7b89b8f4484a\") " pod="openshift-authentication/oauth-openshift-d5c987897-r9s5c" Jan 20 09:12:28 crc kubenswrapper[5115]: I0120 09:12:28.001304 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\"" Jan 20 09:12:28 crc 
kubenswrapper[5115]: I0120 09:12:28.130327 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-d5c987897-r9s5c" Jan 20 09:12:28 crc kubenswrapper[5115]: I0120 09:12:28.219321 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\"" Jan 20 09:12:28 crc kubenswrapper[5115]: I0120 09:12:28.342165 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\"" Jan 20 09:12:28 crc kubenswrapper[5115]: I0120 09:12:28.343970 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Jan 20 09:12:28 crc kubenswrapper[5115]: I0120 09:12:28.432635 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\"" Jan 20 09:12:28 crc kubenswrapper[5115]: I0120 09:12:28.576777 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\"" Jan 20 09:12:28 crc kubenswrapper[5115]: I0120 09:12:28.612021 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\"" Jan 20 09:12:28 crc kubenswrapper[5115]: I0120 09:12:28.678365 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\"" Jan 20 09:12:28 crc kubenswrapper[5115]: I0120 09:12:28.788417 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\"" Jan 20 09:12:28 crc kubenswrapper[5115]: I0120 09:12:28.988801 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Jan 20 09:12:29 crc kubenswrapper[5115]: I0120 09:12:29.090494 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\"" Jan 20 09:12:29 crc kubenswrapper[5115]: I0120 09:12:29.205007 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\"" Jan 20 09:12:29 crc kubenswrapper[5115]: I0120 09:12:29.214495 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\"" Jan 20 09:12:29 crc kubenswrapper[5115]: I0120 09:12:29.284052 5115 kubelet.go:2547] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 20 09:12:29 crc kubenswrapper[5115]: I0120 09:12:29.284446 5115 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" containerID="cri-o://4d6b5f4076d96a3976239b71de54fa0176dcbdda361c4d53976d86a2e687e247" gracePeriod=5 Jan 20 09:12:29 crc kubenswrapper[5115]: I0120 09:12:29.415247 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\"" Jan 20 09:12:29 crc kubenswrapper[5115]: I0120 09:12:29.445549 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Jan 20 09:12:29 crc kubenswrapper[5115]: I0120 09:12:29.515960 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\"" Jan 20 09:12:29 crc kubenswrapper[5115]: I0120 09:12:29.591589 5115 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\"" Jan 20 09:12:29 crc kubenswrapper[5115]: I0120 09:12:29.636781 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\"" Jan 20 09:12:29 crc kubenswrapper[5115]: I0120 09:12:29.653832 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\"" Jan 20 09:12:29 crc kubenswrapper[5115]: I0120 09:12:29.668382 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\"" Jan 20 09:12:29 crc kubenswrapper[5115]: I0120 09:12:29.770417 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\"" Jan 20 09:12:29 crc kubenswrapper[5115]: I0120 09:12:29.781260 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\"" Jan 20 09:12:29 crc kubenswrapper[5115]: I0120 09:12:29.802070 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\"" Jan 20 09:12:29 crc kubenswrapper[5115]: I0120 09:12:29.885745 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\"" Jan 20 09:12:29 crc kubenswrapper[5115]: I0120 09:12:29.925351 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\"" Jan 20 09:12:29 crc kubenswrapper[5115]: I0120 09:12:29.927839 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\"" Jan 20 09:12:30 crc kubenswrapper[5115]: I0120 09:12:30.080094 5115 reflector.go:430] "Caches 
populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\"" Jan 20 09:12:30 crc kubenswrapper[5115]: I0120 09:12:30.115385 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\"" Jan 20 09:12:30 crc kubenswrapper[5115]: I0120 09:12:30.164202 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\"" Jan 20 09:12:30 crc kubenswrapper[5115]: I0120 09:12:30.245519 5115 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object" Jan 20 09:12:30 crc kubenswrapper[5115]: I0120 09:12:30.304020 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\"" Jan 20 09:12:30 crc kubenswrapper[5115]: I0120 09:12:30.314143 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\"" Jan 20 09:12:30 crc kubenswrapper[5115]: I0120 09:12:30.397993 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\"" Jan 20 09:12:30 crc kubenswrapper[5115]: I0120 09:12:30.404591 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\"" Jan 20 09:12:30 crc kubenswrapper[5115]: I0120 09:12:30.410387 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\"" Jan 20 09:12:30 crc kubenswrapper[5115]: I0120 
09:12:30.567543 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\"" Jan 20 09:12:30 crc kubenswrapper[5115]: I0120 09:12:30.608379 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\"" Jan 20 09:12:30 crc kubenswrapper[5115]: I0120 09:12:30.685101 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\"" Jan 20 09:12:30 crc kubenswrapper[5115]: I0120 09:12:30.726573 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\"" Jan 20 09:12:30 crc kubenswrapper[5115]: I0120 09:12:30.785134 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\"" Jan 20 09:12:30 crc kubenswrapper[5115]: I0120 09:12:30.818844 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\"" Jan 20 09:12:31 crc kubenswrapper[5115]: I0120 09:12:31.101885 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\"" Jan 20 09:12:31 crc kubenswrapper[5115]: I0120 09:12:31.120205 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Jan 20 09:12:31 crc kubenswrapper[5115]: I0120 09:12:31.278078 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" Jan 20 09:12:31 crc kubenswrapper[5115]: I0120 09:12:31.300940 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\"" Jan 20 09:12:31 crc kubenswrapper[5115]: I0120 09:12:31.427017 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\"" Jan 20 09:12:31 crc kubenswrapper[5115]: I0120 09:12:31.628312 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\"" Jan 20 09:12:31 crc kubenswrapper[5115]: I0120 09:12:31.711060 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\"" Jan 20 09:12:31 crc kubenswrapper[5115]: I0120 09:12:31.828625 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\"" Jan 20 09:12:31 crc kubenswrapper[5115]: I0120 09:12:31.891999 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-d5c987897-r9s5c"] Jan 20 09:12:31 crc kubenswrapper[5115]: I0120 09:12:31.894137 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\"" Jan 20 09:12:31 crc kubenswrapper[5115]: I0120 09:12:31.963330 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\"" Jan 20 09:12:32 crc kubenswrapper[5115]: I0120 09:12:32.097165 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\"" Jan 20 09:12:32 crc kubenswrapper[5115]: I0120 09:12:32.211386 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\"" Jan 20 09:12:32 crc kubenswrapper[5115]: I0120 09:12:32.355111 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Jan 20 09:12:32 crc kubenswrapper[5115]: I0120 09:12:32.439252 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\"" Jan 20 09:12:32 crc kubenswrapper[5115]: I0120 09:12:32.482240 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\"" Jan 20 09:12:32 crc kubenswrapper[5115]: I0120 09:12:32.570543 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\"" Jan 20 09:12:32 crc kubenswrapper[5115]: I0120 09:12:32.826777 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\"" Jan 20 09:12:32 crc kubenswrapper[5115]: I0120 09:12:32.831515 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-d5c987897-r9s5c" event={"ID":"8645f26f-7d64-4135-94fe-7b89b8f4484a","Type":"ContainerStarted","Data":"2a6d38ea66188b4dc9fbd34e1083c3ee3c881d72f7487cad89f80c82aacad543"} Jan 20 09:12:32 crc kubenswrapper[5115]: I0120 09:12:32.831569 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-d5c987897-r9s5c" event={"ID":"8645f26f-7d64-4135-94fe-7b89b8f4484a","Type":"ContainerStarted","Data":"3bf21765f71fe46f8bd1ca0017ec2ac3c4e1755182d9b057882d8e552348a522"} Jan 20 09:12:32 crc kubenswrapper[5115]: I0120 09:12:32.831946 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-d5c987897-r9s5c" Jan 20 09:12:32 crc kubenswrapper[5115]: I0120 09:12:32.861359 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-d5c987897-r9s5c" podStartSLOduration=70.861333763 
podStartE2EDuration="1m10.861333763s" podCreationTimestamp="2026-01-20 09:11:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:12:32.859745248 +0000 UTC m=+263.028523878" watchObservedRunningTime="2026-01-20 09:12:32.861333763 +0000 UTC m=+263.030112333" Jan 20 09:12:32 crc kubenswrapper[5115]: I0120 09:12:32.957614 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\"" Jan 20 09:12:33 crc kubenswrapper[5115]: I0120 09:12:33.036072 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\"" Jan 20 09:12:33 crc kubenswrapper[5115]: I0120 09:12:33.045632 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\"" Jan 20 09:12:33 crc kubenswrapper[5115]: I0120 09:12:33.117047 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\"" Jan 20 09:12:33 crc kubenswrapper[5115]: I0120 09:12:33.142871 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\"" Jan 20 09:12:33 crc kubenswrapper[5115]: I0120 09:12:33.238045 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-d5c987897-r9s5c" Jan 20 09:12:33 crc kubenswrapper[5115]: I0120 09:12:33.353441 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\"" Jan 20 09:12:33 crc kubenswrapper[5115]: I0120 09:12:33.397162 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\"" Jan 20 09:12:33 
crc kubenswrapper[5115]: I0120 09:12:33.604283 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\""
Jan 20 09:12:33 crc kubenswrapper[5115]: I0120 09:12:33.922713 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\""
Jan 20 09:12:34 crc kubenswrapper[5115]: I0120 09:12:34.435182 5115 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log"
Jan 20 09:12:34 crc kubenswrapper[5115]: I0120 09:12:34.435355 5115 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 20 09:12:34 crc kubenswrapper[5115]: I0120 09:12:34.438181 5115 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object"
Jan 20 09:12:34 crc kubenswrapper[5115]: I0120 09:12:34.573983 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") "
Jan 20 09:12:34 crc kubenswrapper[5115]: I0120 09:12:34.574152 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") "
Jan 20 09:12:34 crc kubenswrapper[5115]: I0120 09:12:34.574201 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests" (OuterVolumeSpecName: "manifests") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 20 09:12:34 crc kubenswrapper[5115]: I0120 09:12:34.574276 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") "
Jan 20 09:12:34 crc kubenswrapper[5115]: I0120 09:12:34.574361 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 20 09:12:34 crc kubenswrapper[5115]: I0120 09:12:34.574462 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") "
Jan 20 09:12:34 crc kubenswrapper[5115]: I0120 09:12:34.574491 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock" (OuterVolumeSpecName: "var-lock") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 20 09:12:34 crc kubenswrapper[5115]: I0120 09:12:34.574560 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log" (OuterVolumeSpecName: "var-log") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 20 09:12:34 crc kubenswrapper[5115]: I0120 09:12:34.574530 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") "
Jan 20 09:12:34 crc kubenswrapper[5115]: I0120 09:12:34.575433 5115 reconciler_common.go:299] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") on node \"crc\" DevicePath \"\""
Jan 20 09:12:34 crc kubenswrapper[5115]: I0120 09:12:34.575470 5115 reconciler_common.go:299] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") on node \"crc\" DevicePath \"\""
Jan 20 09:12:34 crc kubenswrapper[5115]: I0120 09:12:34.575488 5115 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") on node \"crc\" DevicePath \"\""
Jan 20 09:12:34 crc kubenswrapper[5115]: I0120 09:12:34.575507 5115 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") on node \"crc\" DevicePath \"\""
Jan 20 09:12:34 crc kubenswrapper[5115]: I0120 09:12:34.587740 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 20 09:12:34 crc kubenswrapper[5115]: I0120 09:12:34.677060 5115 reconciler_common.go:299] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") on node \"crc\" DevicePath \"\""
Jan 20 09:12:34 crc kubenswrapper[5115]: I0120 09:12:34.919394 5115 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log"
Jan 20 09:12:34 crc kubenswrapper[5115]: I0120 09:12:34.919485 5115 generic.go:358] "Generic (PLEG): container finished" podID="f7dbc7e1ee9c187a863ef9b473fad27b" containerID="4d6b5f4076d96a3976239b71de54fa0176dcbdda361c4d53976d86a2e687e247" exitCode=137
Jan 20 09:12:34 crc kubenswrapper[5115]: I0120 09:12:34.919658 5115 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 20 09:12:34 crc kubenswrapper[5115]: I0120 09:12:34.919701 5115 scope.go:117] "RemoveContainer" containerID="4d6b5f4076d96a3976239b71de54fa0176dcbdda361c4d53976d86a2e687e247"
Jan 20 09:12:34 crc kubenswrapper[5115]: I0120 09:12:34.952388 5115 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object"
Jan 20 09:12:34 crc kubenswrapper[5115]: I0120 09:12:34.954813 5115 scope.go:117] "RemoveContainer" containerID="4d6b5f4076d96a3976239b71de54fa0176dcbdda361c4d53976d86a2e687e247"
Jan 20 09:12:34 crc kubenswrapper[5115]: E0120 09:12:34.955471 5115 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d6b5f4076d96a3976239b71de54fa0176dcbdda361c4d53976d86a2e687e247\": container with ID starting with 4d6b5f4076d96a3976239b71de54fa0176dcbdda361c4d53976d86a2e687e247 not found: ID does not exist" containerID="4d6b5f4076d96a3976239b71de54fa0176dcbdda361c4d53976d86a2e687e247"
Jan 20 09:12:34 crc kubenswrapper[5115]: I0120 09:12:34.955524 5115 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d6b5f4076d96a3976239b71de54fa0176dcbdda361c4d53976d86a2e687e247"} err="failed to get container status \"4d6b5f4076d96a3976239b71de54fa0176dcbdda361c4d53976d86a2e687e247\": rpc error: code = NotFound desc = could not find container \"4d6b5f4076d96a3976239b71de54fa0176dcbdda361c4d53976d86a2e687e247\": container with ID starting with 4d6b5f4076d96a3976239b71de54fa0176dcbdda361c4d53976d86a2e687e247 not found: ID does not exist"
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.393277 5115 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6cb7c98cbc-lhp2l"]
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.393595 5115 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6cb7c98cbc-lhp2l" podUID="0e0393a6-c76b-4bd6-9358-0314c2eca550" containerName="controller-manager" containerID="cri-o://f9b27559a5e47c9d1a60fe2eb29dd7b4059fb320392737d90d53647d18545026" gracePeriod=30
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.404441 5115 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object"
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.418589 5115 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b95c9954c-nvlzz"]
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.418986 5115 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6b95c9954c-nvlzz" podUID="6dbb2166-3ca6-40c1-8837-22587ad8df2e" containerName="route-controller-manager" containerID="cri-o://694c50e214a14f27a0eff68145449372cab7bc07d76f0814f74f905d81efe8ea" gracePeriod=30
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.424783 5115 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object"
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.888149 5115 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6b95c9954c-nvlzz"
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.891999 5115 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6cb7c98cbc-lhp2l"
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.905361 5115 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object"
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.915537 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cd84fb898-9bd7b"]
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.916158 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor"
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.916175 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor"
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.916192 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6dbb2166-3ca6-40c1-8837-22587ad8df2e" containerName="route-controller-manager"
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.916198 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dbb2166-3ca6-40c1-8837-22587ad8df2e" containerName="route-controller-manager"
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.916214 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0e0393a6-c76b-4bd6-9358-0314c2eca550" containerName="controller-manager"
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.916221 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e0393a6-c76b-4bd6-9358-0314c2eca550" containerName="controller-manager"
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.916324 5115 memory_manager.go:356] "RemoveStaleState removing state" podUID="6dbb2166-3ca6-40c1-8837-22587ad8df2e" containerName="route-controller-manager"
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.916333 5115 memory_manager.go:356] "RemoveStaleState removing state" podUID="0e0393a6-c76b-4bd6-9358-0314c2eca550" containerName="controller-manager"
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.916342 5115 memory_manager.go:356] "RemoveStaleState removing state" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor"
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.919602 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6cd84fb898-9bd7b"
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.926393 5115 generic.go:358] "Generic (PLEG): container finished" podID="6dbb2166-3ca6-40c1-8837-22587ad8df2e" containerID="694c50e214a14f27a0eff68145449372cab7bc07d76f0814f74f905d81efe8ea" exitCode=0
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.926484 5115 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6b95c9954c-nvlzz"
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.926602 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6b95c9954c-nvlzz" event={"ID":"6dbb2166-3ca6-40c1-8837-22587ad8df2e","Type":"ContainerDied","Data":"694c50e214a14f27a0eff68145449372cab7bc07d76f0814f74f905d81efe8ea"}
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.926648 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6b95c9954c-nvlzz" event={"ID":"6dbb2166-3ca6-40c1-8837-22587ad8df2e","Type":"ContainerDied","Data":"368a735da1f99fc4138c761b29484fa6a4c95fa01e8ee82b62c23cf95bf3f7b8"}
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.926682 5115 scope.go:117] "RemoveContainer" containerID="694c50e214a14f27a0eff68145449372cab7bc07d76f0814f74f905d81efe8ea"
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.931678 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8q8k\" (UniqueName: \"kubernetes.io/projected/0e0393a6-c76b-4bd6-9358-0314c2eca550-kube-api-access-k8q8k\") pod \"0e0393a6-c76b-4bd6-9358-0314c2eca550\" (UID: \"0e0393a6-c76b-4bd6-9358-0314c2eca550\") "
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.931729 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0e0393a6-c76b-4bd6-9358-0314c2eca550-tmp\") pod \"0e0393a6-c76b-4bd6-9358-0314c2eca550\" (UID: \"0e0393a6-c76b-4bd6-9358-0314c2eca550\") "
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.931757 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e0393a6-c76b-4bd6-9358-0314c2eca550-config\") pod \"0e0393a6-c76b-4bd6-9358-0314c2eca550\" (UID: \"0e0393a6-c76b-4bd6-9358-0314c2eca550\") "
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.931800 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6dbb2166-3ca6-40c1-8837-22587ad8df2e-client-ca\") pod \"6dbb2166-3ca6-40c1-8837-22587ad8df2e\" (UID: \"6dbb2166-3ca6-40c1-8837-22587ad8df2e\") "
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.931863 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6dbb2166-3ca6-40c1-8837-22587ad8df2e-tmp\") pod \"6dbb2166-3ca6-40c1-8837-22587ad8df2e\" (UID: \"6dbb2166-3ca6-40c1-8837-22587ad8df2e\") "
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.931912 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6dbb2166-3ca6-40c1-8837-22587ad8df2e-config\") pod \"6dbb2166-3ca6-40c1-8837-22587ad8df2e\" (UID: \"6dbb2166-3ca6-40c1-8837-22587ad8df2e\") "
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.931939 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0e0393a6-c76b-4bd6-9358-0314c2eca550-client-ca\") pod \"0e0393a6-c76b-4bd6-9358-0314c2eca550\" (UID: \"0e0393a6-c76b-4bd6-9358-0314c2eca550\") "
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.931963 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6dbb2166-3ca6-40c1-8837-22587ad8df2e-serving-cert\") pod \"6dbb2166-3ca6-40c1-8837-22587ad8df2e\" (UID: \"6dbb2166-3ca6-40c1-8837-22587ad8df2e\") "
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.932033 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e0393a6-c76b-4bd6-9358-0314c2eca550-serving-cert\") pod \"0e0393a6-c76b-4bd6-9358-0314c2eca550\" (UID: \"0e0393a6-c76b-4bd6-9358-0314c2eca550\") "
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.932077 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0e0393a6-c76b-4bd6-9358-0314c2eca550-proxy-ca-bundles\") pod \"0e0393a6-c76b-4bd6-9358-0314c2eca550\" (UID: \"0e0393a6-c76b-4bd6-9358-0314c2eca550\") "
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.932165 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fr57s\" (UniqueName: \"kubernetes.io/projected/6dbb2166-3ca6-40c1-8837-22587ad8df2e-kube-api-access-fr57s\") pod \"6dbb2166-3ca6-40c1-8837-22587ad8df2e\" (UID: \"6dbb2166-3ca6-40c1-8837-22587ad8df2e\") "
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.932313 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svdp4\" (UniqueName: \"kubernetes.io/projected/3a019ddb-06f4-46e8-b51d-4ff472d661f7-kube-api-access-svdp4\") pod \"route-controller-manager-6cd84fb898-9bd7b\" (UID: \"3a019ddb-06f4-46e8-b51d-4ff472d661f7\") " pod="openshift-route-controller-manager/route-controller-manager-6cd84fb898-9bd7b"
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.932401 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3a019ddb-06f4-46e8-b51d-4ff472d661f7-tmp\") pod \"route-controller-manager-6cd84fb898-9bd7b\" (UID: \"3a019ddb-06f4-46e8-b51d-4ff472d661f7\") " pod="openshift-route-controller-manager/route-controller-manager-6cd84fb898-9bd7b"
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.932438 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a019ddb-06f4-46e8-b51d-4ff472d661f7-config\") pod \"route-controller-manager-6cd84fb898-9bd7b\" (UID: \"3a019ddb-06f4-46e8-b51d-4ff472d661f7\") " pod="openshift-route-controller-manager/route-controller-manager-6cd84fb898-9bd7b"
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.932459 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a019ddb-06f4-46e8-b51d-4ff472d661f7-serving-cert\") pod \"route-controller-manager-6cd84fb898-9bd7b\" (UID: \"3a019ddb-06f4-46e8-b51d-4ff472d661f7\") " pod="openshift-route-controller-manager/route-controller-manager-6cd84fb898-9bd7b"
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.932515 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3a019ddb-06f4-46e8-b51d-4ff472d661f7-client-ca\") pod \"route-controller-manager-6cd84fb898-9bd7b\" (UID: \"3a019ddb-06f4-46e8-b51d-4ff472d661f7\") " pod="openshift-route-controller-manager/route-controller-manager-6cd84fb898-9bd7b"
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.934710 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e0393a6-c76b-4bd6-9358-0314c2eca550-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "0e0393a6-c76b-4bd6-9358-0314c2eca550" (UID: "0e0393a6-c76b-4bd6-9358-0314c2eca550"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.934872 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6dbb2166-3ca6-40c1-8837-22587ad8df2e-tmp" (OuterVolumeSpecName: "tmp") pod "6dbb2166-3ca6-40c1-8837-22587ad8df2e" (UID: "6dbb2166-3ca6-40c1-8837-22587ad8df2e"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.937171 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e0393a6-c76b-4bd6-9358-0314c2eca550-config" (OuterVolumeSpecName: "config") pod "0e0393a6-c76b-4bd6-9358-0314c2eca550" (UID: "0e0393a6-c76b-4bd6-9358-0314c2eca550"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.938058 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e0393a6-c76b-4bd6-9358-0314c2eca550-tmp" (OuterVolumeSpecName: "tmp") pod "0e0393a6-c76b-4bd6-9358-0314c2eca550" (UID: "0e0393a6-c76b-4bd6-9358-0314c2eca550"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.938127 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e0393a6-c76b-4bd6-9358-0314c2eca550-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0e0393a6-c76b-4bd6-9358-0314c2eca550" (UID: "0e0393a6-c76b-4bd6-9358-0314c2eca550"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.938260 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6dbb2166-3ca6-40c1-8837-22587ad8df2e-kube-api-access-fr57s" (OuterVolumeSpecName: "kube-api-access-fr57s") pod "6dbb2166-3ca6-40c1-8837-22587ad8df2e" (UID: "6dbb2166-3ca6-40c1-8837-22587ad8df2e"). InnerVolumeSpecName "kube-api-access-fr57s". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.938431 5115 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object"
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.938495 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6dbb2166-3ca6-40c1-8837-22587ad8df2e-client-ca" (OuterVolumeSpecName: "client-ca") pod "6dbb2166-3ca6-40c1-8837-22587ad8df2e" (UID: "6dbb2166-3ca6-40c1-8837-22587ad8df2e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.938695 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6dbb2166-3ca6-40c1-8837-22587ad8df2e-config" (OuterVolumeSpecName: "config") pod "6dbb2166-3ca6-40c1-8837-22587ad8df2e" (UID: "6dbb2166-3ca6-40c1-8837-22587ad8df2e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.939652 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e0393a6-c76b-4bd6-9358-0314c2eca550-client-ca" (OuterVolumeSpecName: "client-ca") pod "0e0393a6-c76b-4bd6-9358-0314c2eca550" (UID: "0e0393a6-c76b-4bd6-9358-0314c2eca550"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.941481 5115 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object"
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.941876 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6dbb2166-3ca6-40c1-8837-22587ad8df2e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6dbb2166-3ca6-40c1-8837-22587ad8df2e" (UID: "6dbb2166-3ca6-40c1-8837-22587ad8df2e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.943099 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e0393a6-c76b-4bd6-9358-0314c2eca550-kube-api-access-k8q8k" (OuterVolumeSpecName: "kube-api-access-k8q8k") pod "0e0393a6-c76b-4bd6-9358-0314c2eca550" (UID: "0e0393a6-c76b-4bd6-9358-0314c2eca550"). InnerVolumeSpecName "kube-api-access-k8q8k". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.946011 5115 generic.go:358] "Generic (PLEG): container finished" podID="0e0393a6-c76b-4bd6-9358-0314c2eca550" containerID="f9b27559a5e47c9d1a60fe2eb29dd7b4059fb320392737d90d53647d18545026" exitCode=0
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.946214 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6cb7c98cbc-lhp2l" event={"ID":"0e0393a6-c76b-4bd6-9358-0314c2eca550","Type":"ContainerDied","Data":"f9b27559a5e47c9d1a60fe2eb29dd7b4059fb320392737d90d53647d18545026"}
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.946245 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6cb7c98cbc-lhp2l" event={"ID":"0e0393a6-c76b-4bd6-9358-0314c2eca550","Type":"ContainerDied","Data":"16f00ae2e909bdbab9e9f0bb68dfa4c4d6e9c21c455eefd3d26a54cf17f6d6dd"}
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.946349 5115 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6cb7c98cbc-lhp2l"
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.946421 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cd84fb898-9bd7b"]
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.950753 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-64f6849bcb-56vwt"]
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.957824 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-64f6849bcb-56vwt"
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.964426 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-64f6849bcb-56vwt"]
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.968332 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\""
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.968550 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\""
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.968676 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\""
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.968808 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\""
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.968949 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\""
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.969110 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\""
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.969683 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\""
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.976351 5115 scope.go:117] "RemoveContainer" containerID="694c50e214a14f27a0eff68145449372cab7bc07d76f0814f74f905d81efe8ea"
Jan 20 09:12:35 crc kubenswrapper[5115]: E0120 09:12:35.983045 5115 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"694c50e214a14f27a0eff68145449372cab7bc07d76f0814f74f905d81efe8ea\": container with ID starting with 694c50e214a14f27a0eff68145449372cab7bc07d76f0814f74f905d81efe8ea not found: ID does not exist" containerID="694c50e214a14f27a0eff68145449372cab7bc07d76f0814f74f905d81efe8ea"
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.983091 5115 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"694c50e214a14f27a0eff68145449372cab7bc07d76f0814f74f905d81efe8ea"} err="failed to get container status \"694c50e214a14f27a0eff68145449372cab7bc07d76f0814f74f905d81efe8ea\": rpc error: code = NotFound desc = could not find container \"694c50e214a14f27a0eff68145449372cab7bc07d76f0814f74f905d81efe8ea\": container with ID starting with 694c50e214a14f27a0eff68145449372cab7bc07d76f0814f74f905d81efe8ea not found: ID does not exist"
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.983121 5115 scope.go:117] "RemoveContainer" containerID="f9b27559a5e47c9d1a60fe2eb29dd7b4059fb320392737d90d53647d18545026"
Jan 20 09:12:36 crc kubenswrapper[5115]: I0120 09:12:36.020145 5115 scope.go:117] "RemoveContainer" containerID="f9b27559a5e47c9d1a60fe2eb29dd7b4059fb320392737d90d53647d18545026"
Jan 20 09:12:36 crc kubenswrapper[5115]: E0120 09:12:36.021084 5115 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9b27559a5e47c9d1a60fe2eb29dd7b4059fb320392737d90d53647d18545026\": container with ID starting with f9b27559a5e47c9d1a60fe2eb29dd7b4059fb320392737d90d53647d18545026 not found: ID does not exist" containerID="f9b27559a5e47c9d1a60fe2eb29dd7b4059fb320392737d90d53647d18545026"
Jan 20 09:12:36 crc kubenswrapper[5115]: I0120 09:12:36.021120 5115 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9b27559a5e47c9d1a60fe2eb29dd7b4059fb320392737d90d53647d18545026"} err="failed to get container status \"f9b27559a5e47c9d1a60fe2eb29dd7b4059fb320392737d90d53647d18545026\": rpc error: code = NotFound desc = could not find container \"f9b27559a5e47c9d1a60fe2eb29dd7b4059fb320392737d90d53647d18545026\": container with ID starting with f9b27559a5e47c9d1a60fe2eb29dd7b4059fb320392737d90d53647d18545026 not found: ID does not exist"
Jan 20 09:12:36 crc kubenswrapper[5115]: I0120 09:12:36.033947 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3a019ddb-06f4-46e8-b51d-4ff472d661f7-tmp\") pod \"route-controller-manager-6cd84fb898-9bd7b\" (UID: \"3a019ddb-06f4-46e8-b51d-4ff472d661f7\") " pod="openshift-route-controller-manager/route-controller-manager-6cd84fb898-9bd7b"
Jan 20 09:12:36 crc kubenswrapper[5115]: I0120 09:12:36.033990 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/40317894-58cf-4fd9-bbfe-0338895305fb-client-ca\") pod \"controller-manager-64f6849bcb-56vwt\" (UID: \"40317894-58cf-4fd9-bbfe-0338895305fb\") " pod="openshift-controller-manager/controller-manager-64f6849bcb-56vwt"
Jan 20 09:12:36 crc kubenswrapper[5115]: I0120 09:12:36.034016 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a019ddb-06f4-46e8-b51d-4ff472d661f7-config\") pod \"route-controller-manager-6cd84fb898-9bd7b\" (UID: \"3a019ddb-06f4-46e8-b51d-4ff472d661f7\") " pod="openshift-route-controller-manager/route-controller-manager-6cd84fb898-9bd7b"
Jan 20 09:12:36 crc kubenswrapper[5115]: I0120 09:12:36.034032 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a019ddb-06f4-46e8-b51d-4ff472d661f7-serving-cert\") pod \"route-controller-manager-6cd84fb898-9bd7b\" (UID: \"3a019ddb-06f4-46e8-b51d-4ff472d661f7\") " pod="openshift-route-controller-manager/route-controller-manager-6cd84fb898-9bd7b"
Jan 20 09:12:36 crc kubenswrapper[5115]: I0120 09:12:36.034048 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40317894-58cf-4fd9-bbfe-0338895305fb-config\") pod \"controller-manager-64f6849bcb-56vwt\" (UID: \"40317894-58cf-4fd9-bbfe-0338895305fb\") " pod="openshift-controller-manager/controller-manager-64f6849bcb-56vwt"
Jan 20 09:12:36 crc kubenswrapper[5115]: I0120 09:12:36.034095 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/40317894-58cf-4fd9-bbfe-0338895305fb-tmp\") pod \"controller-manager-64f6849bcb-56vwt\" (UID: \"40317894-58cf-4fd9-bbfe-0338895305fb\") " pod="openshift-controller-manager/controller-manager-64f6849bcb-56vwt"
Jan 20 09:12:36 crc kubenswrapper[5115]: I0120 09:12:36.034127 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3a019ddb-06f4-46e8-b51d-4ff472d661f7-client-ca\") pod \"route-controller-manager-6cd84fb898-9bd7b\" (UID: \"3a019ddb-06f4-46e8-b51d-4ff472d661f7\") " pod="openshift-route-controller-manager/route-controller-manager-6cd84fb898-9bd7b"
Jan 20 09:12:36 crc kubenswrapper[5115]: I0120 09:12:36.034146 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/40317894-58cf-4fd9-bbfe-0338895305fb-proxy-ca-bundles\") pod \"controller-manager-64f6849bcb-56vwt\" (UID: \"40317894-58cf-4fd9-bbfe-0338895305fb\") " pod="openshift-controller-manager/controller-manager-64f6849bcb-56vwt"
Jan 20 09:12:36 crc kubenswrapper[5115]: I0120 09:12:36.034183 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-svdp4\" (UniqueName: \"kubernetes.io/projected/3a019ddb-06f4-46e8-b51d-4ff472d661f7-kube-api-access-svdp4\") pod \"route-controller-manager-6cd84fb898-9bd7b\" (UID: \"3a019ddb-06f4-46e8-b51d-4ff472d661f7\") " pod="openshift-route-controller-manager/route-controller-manager-6cd84fb898-9bd7b"
Jan 20 09:12:36 crc kubenswrapper[5115]: I0120 09:12:36.034208 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/40317894-58cf-4fd9-bbfe-0338895305fb-serving-cert\") pod \"controller-manager-64f6849bcb-56vwt\" (UID: \"40317894-58cf-4fd9-bbfe-0338895305fb\") " pod="openshift-controller-manager/controller-manager-64f6849bcb-56vwt"
Jan 20 09:12:36 crc kubenswrapper[5115]: I0120 09:12:36.034228 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwtk2\" (UniqueName: \"kubernetes.io/projected/40317894-58cf-4fd9-bbfe-0338895305fb-kube-api-access-vwtk2\") pod \"controller-manager-64f6849bcb-56vwt\" (UID: \"40317894-58cf-4fd9-bbfe-0338895305fb\") " pod="openshift-controller-manager/controller-manager-64f6849bcb-56vwt"
Jan 20 09:12:36 crc kubenswrapper[5115]: I0120 09:12:36.034278 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fr57s\" (UniqueName: \"kubernetes.io/projected/6dbb2166-3ca6-40c1-8837-22587ad8df2e-kube-api-access-fr57s\") on node \"crc\" DevicePath \"\""
Jan 20 09:12:36 crc kubenswrapper[5115]: I0120 09:12:36.034290 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-k8q8k\" (UniqueName: \"kubernetes.io/projected/0e0393a6-c76b-4bd6-9358-0314c2eca550-kube-api-access-k8q8k\") on node \"crc\" DevicePath \"\""
Jan 20 09:12:36 crc kubenswrapper[5115]: I0120 09:12:36.034298 5115 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0e0393a6-c76b-4bd6-9358-0314c2eca550-tmp\") on node \"crc\" DevicePath \"\""
Jan 20 09:12:36 crc kubenswrapper[5115]: I0120 09:12:36.034307 5115 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e0393a6-c76b-4bd6-9358-0314c2eca550-config\") on node \"crc\" DevicePath \"\""
Jan 20 09:12:36 crc kubenswrapper[5115]: I0120 09:12:36.034316 5115 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6dbb2166-3ca6-40c1-8837-22587ad8df2e-client-ca\") on node \"crc\" DevicePath \"\""
Jan 20 09:12:36 crc kubenswrapper[5115]: I0120 09:12:36.034324 5115 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6dbb2166-3ca6-40c1-8837-22587ad8df2e-tmp\") on node \"crc\" DevicePath \"\""
Jan 20 09:12:36 crc kubenswrapper[5115]: I0120 09:12:36.034333 5115 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6dbb2166-3ca6-40c1-8837-22587ad8df2e-config\") on node \"crc\" DevicePath \"\""
Jan 20 09:12:36 crc kubenswrapper[5115]: I0120 09:12:36.034340 5115 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0e0393a6-c76b-4bd6-9358-0314c2eca550-client-ca\") on node \"crc\" DevicePath \"\""
Jan 20 09:12:36 crc kubenswrapper[5115]: I0120 09:12:36.034349 5115 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6dbb2166-3ca6-40c1-8837-22587ad8df2e-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 20 09:12:36 crc kubenswrapper[5115]: I0120 09:12:36.034357 5115 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e0393a6-c76b-4bd6-9358-0314c2eca550-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 20 09:12:36 crc kubenswrapper[5115]: I0120 09:12:36.034366 5115 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName:
\"kubernetes.io/configmap/0e0393a6-c76b-4bd6-9358-0314c2eca550-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 20 09:12:36 crc kubenswrapper[5115]: I0120 09:12:36.034731 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3a019ddb-06f4-46e8-b51d-4ff472d661f7-tmp\") pod \"route-controller-manager-6cd84fb898-9bd7b\" (UID: \"3a019ddb-06f4-46e8-b51d-4ff472d661f7\") " pod="openshift-route-controller-manager/route-controller-manager-6cd84fb898-9bd7b" Jan 20 09:12:36 crc kubenswrapper[5115]: I0120 09:12:36.035598 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a019ddb-06f4-46e8-b51d-4ff472d661f7-config\") pod \"route-controller-manager-6cd84fb898-9bd7b\" (UID: \"3a019ddb-06f4-46e8-b51d-4ff472d661f7\") " pod="openshift-route-controller-manager/route-controller-manager-6cd84fb898-9bd7b" Jan 20 09:12:36 crc kubenswrapper[5115]: I0120 09:12:36.037471 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3a019ddb-06f4-46e8-b51d-4ff472d661f7-client-ca\") pod \"route-controller-manager-6cd84fb898-9bd7b\" (UID: \"3a019ddb-06f4-46e8-b51d-4ff472d661f7\") " pod="openshift-route-controller-manager/route-controller-manager-6cd84fb898-9bd7b" Jan 20 09:12:36 crc kubenswrapper[5115]: I0120 09:12:36.038197 5115 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object" Jan 20 09:12:36 crc kubenswrapper[5115]: I0120 09:12:36.043332 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/3a019ddb-06f4-46e8-b51d-4ff472d661f7-serving-cert\") pod \"route-controller-manager-6cd84fb898-9bd7b\" (UID: \"3a019ddb-06f4-46e8-b51d-4ff472d661f7\") " pod="openshift-route-controller-manager/route-controller-manager-6cd84fb898-9bd7b" Jan 20 09:12:36 crc kubenswrapper[5115]: I0120 09:12:36.047958 5115 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6cb7c98cbc-lhp2l"] Jan 20 09:12:36 crc kubenswrapper[5115]: I0120 09:12:36.050069 5115 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6cb7c98cbc-lhp2l"] Jan 20 09:12:36 crc kubenswrapper[5115]: I0120 09:12:36.056414 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-svdp4\" (UniqueName: \"kubernetes.io/projected/3a019ddb-06f4-46e8-b51d-4ff472d661f7-kube-api-access-svdp4\") pod \"route-controller-manager-6cd84fb898-9bd7b\" (UID: \"3a019ddb-06f4-46e8-b51d-4ff472d661f7\") " pod="openshift-route-controller-manager/route-controller-manager-6cd84fb898-9bd7b" Jan 20 09:12:36 crc kubenswrapper[5115]: I0120 09:12:36.135597 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/40317894-58cf-4fd9-bbfe-0338895305fb-client-ca\") pod \"controller-manager-64f6849bcb-56vwt\" (UID: \"40317894-58cf-4fd9-bbfe-0338895305fb\") " pod="openshift-controller-manager/controller-manager-64f6849bcb-56vwt" Jan 20 09:12:36 crc kubenswrapper[5115]: I0120 09:12:36.135761 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40317894-58cf-4fd9-bbfe-0338895305fb-config\") pod \"controller-manager-64f6849bcb-56vwt\" (UID: \"40317894-58cf-4fd9-bbfe-0338895305fb\") " pod="openshift-controller-manager/controller-manager-64f6849bcb-56vwt" Jan 20 09:12:36 crc kubenswrapper[5115]: I0120 09:12:36.135813 5115 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/40317894-58cf-4fd9-bbfe-0338895305fb-tmp\") pod \"controller-manager-64f6849bcb-56vwt\" (UID: \"40317894-58cf-4fd9-bbfe-0338895305fb\") " pod="openshift-controller-manager/controller-manager-64f6849bcb-56vwt" Jan 20 09:12:36 crc kubenswrapper[5115]: I0120 09:12:36.135953 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/40317894-58cf-4fd9-bbfe-0338895305fb-proxy-ca-bundles\") pod \"controller-manager-64f6849bcb-56vwt\" (UID: \"40317894-58cf-4fd9-bbfe-0338895305fb\") " pod="openshift-controller-manager/controller-manager-64f6849bcb-56vwt" Jan 20 09:12:36 crc kubenswrapper[5115]: I0120 09:12:36.136045 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/40317894-58cf-4fd9-bbfe-0338895305fb-serving-cert\") pod \"controller-manager-64f6849bcb-56vwt\" (UID: \"40317894-58cf-4fd9-bbfe-0338895305fb\") " pod="openshift-controller-manager/controller-manager-64f6849bcb-56vwt" Jan 20 09:12:36 crc kubenswrapper[5115]: I0120 09:12:36.136094 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vwtk2\" (UniqueName: \"kubernetes.io/projected/40317894-58cf-4fd9-bbfe-0338895305fb-kube-api-access-vwtk2\") pod \"controller-manager-64f6849bcb-56vwt\" (UID: \"40317894-58cf-4fd9-bbfe-0338895305fb\") " pod="openshift-controller-manager/controller-manager-64f6849bcb-56vwt" Jan 20 09:12:36 crc kubenswrapper[5115]: I0120 09:12:36.136413 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/40317894-58cf-4fd9-bbfe-0338895305fb-tmp\") pod \"controller-manager-64f6849bcb-56vwt\" (UID: \"40317894-58cf-4fd9-bbfe-0338895305fb\") " pod="openshift-controller-manager/controller-manager-64f6849bcb-56vwt" Jan 20 09:12:36 crc 
kubenswrapper[5115]: I0120 09:12:36.137253 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/40317894-58cf-4fd9-bbfe-0338895305fb-client-ca\") pod \"controller-manager-64f6849bcb-56vwt\" (UID: \"40317894-58cf-4fd9-bbfe-0338895305fb\") " pod="openshift-controller-manager/controller-manager-64f6849bcb-56vwt" Jan 20 09:12:36 crc kubenswrapper[5115]: I0120 09:12:36.137540 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40317894-58cf-4fd9-bbfe-0338895305fb-config\") pod \"controller-manager-64f6849bcb-56vwt\" (UID: \"40317894-58cf-4fd9-bbfe-0338895305fb\") " pod="openshift-controller-manager/controller-manager-64f6849bcb-56vwt" Jan 20 09:12:36 crc kubenswrapper[5115]: I0120 09:12:36.139034 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/40317894-58cf-4fd9-bbfe-0338895305fb-proxy-ca-bundles\") pod \"controller-manager-64f6849bcb-56vwt\" (UID: \"40317894-58cf-4fd9-bbfe-0338895305fb\") " pod="openshift-controller-manager/controller-manager-64f6849bcb-56vwt" Jan 20 09:12:36 crc kubenswrapper[5115]: I0120 09:12:36.140561 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/40317894-58cf-4fd9-bbfe-0338895305fb-serving-cert\") pod \"controller-manager-64f6849bcb-56vwt\" (UID: \"40317894-58cf-4fd9-bbfe-0338895305fb\") " pod="openshift-controller-manager/controller-manager-64f6849bcb-56vwt" Jan 20 09:12:36 crc kubenswrapper[5115]: I0120 09:12:36.152922 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwtk2\" (UniqueName: \"kubernetes.io/projected/40317894-58cf-4fd9-bbfe-0338895305fb-kube-api-access-vwtk2\") pod \"controller-manager-64f6849bcb-56vwt\" (UID: \"40317894-58cf-4fd9-bbfe-0338895305fb\") " 
pod="openshift-controller-manager/controller-manager-64f6849bcb-56vwt" Jan 20 09:12:36 crc kubenswrapper[5115]: I0120 09:12:36.226639 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e0393a6-c76b-4bd6-9358-0314c2eca550" path="/var/lib/kubelet/pods/0e0393a6-c76b-4bd6-9358-0314c2eca550/volumes" Jan 20 09:12:36 crc kubenswrapper[5115]: I0120 09:12:36.227386 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" path="/var/lib/kubelet/pods/f7dbc7e1ee9c187a863ef9b473fad27b/volumes" Jan 20 09:12:36 crc kubenswrapper[5115]: I0120 09:12:36.239131 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6cd84fb898-9bd7b" Jan 20 09:12:36 crc kubenswrapper[5115]: I0120 09:12:36.277237 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-64f6849bcb-56vwt" Jan 20 09:12:36 crc kubenswrapper[5115]: I0120 09:12:36.288196 5115 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b95c9954c-nvlzz"] Jan 20 09:12:36 crc kubenswrapper[5115]: I0120 09:12:36.294142 5115 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b95c9954c-nvlzz"] Jan 20 09:12:36 crc kubenswrapper[5115]: I0120 09:12:36.425731 5115 ???:1] "http: TLS handshake error from 192.168.126.11:53218: no serving certificate available for the kubelet" Jan 20 09:12:38 crc kubenswrapper[5115]: I0120 09:12:38.226165 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6dbb2166-3ca6-40c1-8837-22587ad8df2e" path="/var/lib/kubelet/pods/6dbb2166-3ca6-40c1-8837-22587ad8df2e/volumes" Jan 20 09:12:38 crc kubenswrapper[5115]: I0120 09:12:38.482965 5115 patch_prober.go:28] interesting pod/machine-config-daemon-zvfcd container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 09:12:38 crc kubenswrapper[5115]: I0120 09:12:38.483152 5115 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zvfcd" podUID="dc89765b-3b00-4f86-ae67-a5088c182918" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 09:12:39 crc kubenswrapper[5115]: W0120 09:12:39.032876 5115 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a019ddb_06f4_46e8_b51d_4ff472d661f7.slice/crio-c141319af3b1f534df3ecc8828deaf51aa01a1c31d459530b0f3e2eb484ddb7f WatchSource:0}: Error finding container c141319af3b1f534df3ecc8828deaf51aa01a1c31d459530b0f3e2eb484ddb7f: Status 404 returned error can't find the container with id c141319af3b1f534df3ecc8828deaf51aa01a1c31d459530b0f3e2eb484ddb7f Jan 20 09:12:39 crc kubenswrapper[5115]: W0120 09:12:39.092814 5115 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod40317894_58cf_4fd9_bbfe_0338895305fb.slice/crio-ed37d8ccf346c5ab6e9a1610b0705d4a9b425c2e5508f8d656d7c2c9e96a8a2c WatchSource:0}: Error finding container ed37d8ccf346c5ab6e9a1610b0705d4a9b425c2e5508f8d656d7c2c9e96a8a2c: Status 404 returned error can't find the container with id ed37d8ccf346c5ab6e9a1610b0705d4a9b425c2e5508f8d656d7c2c9e96a8a2c Jan 20 09:12:39 crc kubenswrapper[5115]: I0120 09:12:39.982755 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-64f6849bcb-56vwt" 
event={"ID":"40317894-58cf-4fd9-bbfe-0338895305fb","Type":"ContainerStarted","Data":"d0381859b81111be73fed33e571215f4eb400274eea60f9124171aa0fdfea2b4"} Jan 20 09:12:39 crc kubenswrapper[5115]: I0120 09:12:39.983123 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-64f6849bcb-56vwt" Jan 20 09:12:39 crc kubenswrapper[5115]: I0120 09:12:39.983133 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-64f6849bcb-56vwt" event={"ID":"40317894-58cf-4fd9-bbfe-0338895305fb","Type":"ContainerStarted","Data":"ed37d8ccf346c5ab6e9a1610b0705d4a9b425c2e5508f8d656d7c2c9e96a8a2c"} Jan 20 09:12:39 crc kubenswrapper[5115]: I0120 09:12:39.985388 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6cd84fb898-9bd7b" event={"ID":"3a019ddb-06f4-46e8-b51d-4ff472d661f7","Type":"ContainerStarted","Data":"a3914ab4f691e634c5241cfd2dd62d61db474c12c2855c3a15a3e9e6d1375506"} Jan 20 09:12:39 crc kubenswrapper[5115]: I0120 09:12:39.985415 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6cd84fb898-9bd7b" event={"ID":"3a019ddb-06f4-46e8-b51d-4ff472d661f7","Type":"ContainerStarted","Data":"c141319af3b1f534df3ecc8828deaf51aa01a1c31d459530b0f3e2eb484ddb7f"} Jan 20 09:12:39 crc kubenswrapper[5115]: I0120 09:12:39.985787 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-6cd84fb898-9bd7b" Jan 20 09:12:39 crc kubenswrapper[5115]: I0120 09:12:39.994170 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6cd84fb898-9bd7b" Jan 20 09:12:40 crc kubenswrapper[5115]: I0120 09:12:40.011731 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-controller-manager/controller-manager-64f6849bcb-56vwt" podStartSLOduration=5.011708253 podStartE2EDuration="5.011708253s" podCreationTimestamp="2026-01-20 09:12:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:12:40.004988912 +0000 UTC m=+270.173767482" watchObservedRunningTime="2026-01-20 09:12:40.011708253 +0000 UTC m=+270.180486793" Jan 20 09:12:40 crc kubenswrapper[5115]: I0120 09:12:40.026611 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6cd84fb898-9bd7b" podStartSLOduration=5.026575816 podStartE2EDuration="5.026575816s" podCreationTimestamp="2026-01-20 09:12:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:12:40.020402031 +0000 UTC m=+270.189180571" watchObservedRunningTime="2026-01-20 09:12:40.026575816 +0000 UTC m=+270.195354346" Jan 20 09:12:40 crc kubenswrapper[5115]: I0120 09:12:40.234353 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-64f6849bcb-56vwt" Jan 20 09:12:41 crc kubenswrapper[5115]: I0120 09:12:41.948027 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\"" Jan 20 09:12:45 crc kubenswrapper[5115]: I0120 09:12:45.728505 5115 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-9gfdh container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.43:8080/healthz\": dial tcp 10.217.0.43:8080: connect: connection refused" start-of-body= Jan 20 09:12:45 crc kubenswrapper[5115]: I0120 09:12:45.728949 5115 prober.go:120] "Probe failed" probeType="Readiness" 
pod="openshift-marketplace/marketplace-operator-547dbd544d-9gfdh" podUID="3984fc5a-413e-46e1-94ab-3c230891fe87" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.43:8080/healthz\": dial tcp 10.217.0.43:8080: connect: connection refused" Jan 20 09:12:46 crc kubenswrapper[5115]: I0120 09:12:46.034257 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-9gfdh" event={"ID":"3984fc5a-413e-46e1-94ab-3c230891fe87","Type":"ContainerDied","Data":"875b2918867b6e3f78a8dae2adc4f181e4875284a8cd56fc5c6d213e75261ea2"} Jan 20 09:12:46 crc kubenswrapper[5115]: I0120 09:12:46.034175 5115 generic.go:358] "Generic (PLEG): container finished" podID="3984fc5a-413e-46e1-94ab-3c230891fe87" containerID="875b2918867b6e3f78a8dae2adc4f181e4875284a8cd56fc5c6d213e75261ea2" exitCode=0 Jan 20 09:12:46 crc kubenswrapper[5115]: I0120 09:12:46.034882 5115 scope.go:117] "RemoveContainer" containerID="875b2918867b6e3f78a8dae2adc4f181e4875284a8cd56fc5c6d213e75261ea2" Jan 20 09:12:47 crc kubenswrapper[5115]: I0120 09:12:47.043622 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-9gfdh" event={"ID":"3984fc5a-413e-46e1-94ab-3c230891fe87","Type":"ContainerStarted","Data":"fbcae2a717246018256a95dc1f3b2f061bf042569074a110a6a284fcd803f2bb"} Jan 20 09:12:47 crc kubenswrapper[5115]: I0120 09:12:47.044796 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-9gfdh" Jan 20 09:12:47 crc kubenswrapper[5115]: I0120 09:12:47.048423 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-9gfdh" Jan 20 09:12:48 crc kubenswrapper[5115]: I0120 09:12:48.084415 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\"" Jan 20 
09:12:49 crc kubenswrapper[5115]: I0120 09:12:49.906415 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\"" Jan 20 09:12:51 crc kubenswrapper[5115]: I0120 09:12:51.228484 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\"" Jan 20 09:12:51 crc kubenswrapper[5115]: I0120 09:12:51.658213 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Jan 20 09:12:52 crc kubenswrapper[5115]: I0120 09:12:52.512215 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Jan 20 09:12:53 crc kubenswrapper[5115]: I0120 09:12:53.626958 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\"" Jan 20 09:12:54 crc kubenswrapper[5115]: I0120 09:12:54.134639 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\"" Jan 20 09:12:55 crc kubenswrapper[5115]: I0120 09:12:55.370887 5115 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-64f6849bcb-56vwt"] Jan 20 09:12:55 crc kubenswrapper[5115]: I0120 09:12:55.371135 5115 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-64f6849bcb-56vwt" podUID="40317894-58cf-4fd9-bbfe-0338895305fb" containerName="controller-manager" containerID="cri-o://d0381859b81111be73fed33e571215f4eb400274eea60f9124171aa0fdfea2b4" gracePeriod=30 Jan 20 09:12:55 crc kubenswrapper[5115]: I0120 09:12:55.397416 5115 kubelet.go:2553] "SyncLoop DELETE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-6cd84fb898-9bd7b"] Jan 20 09:12:55 crc kubenswrapper[5115]: I0120 09:12:55.398059 5115 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6cd84fb898-9bd7b" podUID="3a019ddb-06f4-46e8-b51d-4ff472d661f7" containerName="route-controller-manager" containerID="cri-o://a3914ab4f691e634c5241cfd2dd62d61db474c12c2855c3a15a3e9e6d1375506" gracePeriod=30 Jan 20 09:12:55 crc kubenswrapper[5115]: I0120 09:12:55.946967 5115 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6cd84fb898-9bd7b" Jan 20 09:12:55 crc kubenswrapper[5115]: I0120 09:12:55.977805 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-fd648b944-86lpr"] Jan 20 09:12:55 crc kubenswrapper[5115]: I0120 09:12:55.979122 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a019ddb-06f4-46e8-b51d-4ff472d661f7" containerName="route-controller-manager" Jan 20 09:12:55 crc kubenswrapper[5115]: I0120 09:12:55.979151 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a019ddb-06f4-46e8-b51d-4ff472d661f7" containerName="route-controller-manager" Jan 20 09:12:55 crc kubenswrapper[5115]: I0120 09:12:55.979299 5115 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a019ddb-06f4-46e8-b51d-4ff472d661f7" containerName="route-controller-manager" Jan 20 09:12:55 crc kubenswrapper[5115]: I0120 09:12:55.983333 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-fd648b944-86lpr" Jan 20 09:12:55 crc kubenswrapper[5115]: I0120 09:12:55.992989 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-fd648b944-86lpr"] Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.046627 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3a019ddb-06f4-46e8-b51d-4ff472d661f7-client-ca\") pod \"3a019ddb-06f4-46e8-b51d-4ff472d661f7\" (UID: \"3a019ddb-06f4-46e8-b51d-4ff472d661f7\") " Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.046675 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a019ddb-06f4-46e8-b51d-4ff472d661f7-serving-cert\") pod \"3a019ddb-06f4-46e8-b51d-4ff472d661f7\" (UID: \"3a019ddb-06f4-46e8-b51d-4ff472d661f7\") " Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.046763 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a019ddb-06f4-46e8-b51d-4ff472d661f7-config\") pod \"3a019ddb-06f4-46e8-b51d-4ff472d661f7\" (UID: \"3a019ddb-06f4-46e8-b51d-4ff472d661f7\") " Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.046789 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3a019ddb-06f4-46e8-b51d-4ff472d661f7-tmp\") pod \"3a019ddb-06f4-46e8-b51d-4ff472d661f7\" (UID: \"3a019ddb-06f4-46e8-b51d-4ff472d661f7\") " Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.046804 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-svdp4\" (UniqueName: \"kubernetes.io/projected/3a019ddb-06f4-46e8-b51d-4ff472d661f7-kube-api-access-svdp4\") pod \"3a019ddb-06f4-46e8-b51d-4ff472d661f7\" (UID: 
\"3a019ddb-06f4-46e8-b51d-4ff472d661f7\") " Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.048391 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a019ddb-06f4-46e8-b51d-4ff472d661f7-config" (OuterVolumeSpecName: "config") pod "3a019ddb-06f4-46e8-b51d-4ff472d661f7" (UID: "3a019ddb-06f4-46e8-b51d-4ff472d661f7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.048465 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a019ddb-06f4-46e8-b51d-4ff472d661f7-client-ca" (OuterVolumeSpecName: "client-ca") pod "3a019ddb-06f4-46e8-b51d-4ff472d661f7" (UID: "3a019ddb-06f4-46e8-b51d-4ff472d661f7"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.049116 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a019ddb-06f4-46e8-b51d-4ff472d661f7-tmp" (OuterVolumeSpecName: "tmp") pod "3a019ddb-06f4-46e8-b51d-4ff472d661f7" (UID: "3a019ddb-06f4-46e8-b51d-4ff472d661f7"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.053071 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a019ddb-06f4-46e8-b51d-4ff472d661f7-kube-api-access-svdp4" (OuterVolumeSpecName: "kube-api-access-svdp4") pod "3a019ddb-06f4-46e8-b51d-4ff472d661f7" (UID: "3a019ddb-06f4-46e8-b51d-4ff472d661f7"). InnerVolumeSpecName "kube-api-access-svdp4". 
PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.053086 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a019ddb-06f4-46e8-b51d-4ff472d661f7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "3a019ddb-06f4-46e8-b51d-4ff472d661f7" (UID: "3a019ddb-06f4-46e8-b51d-4ff472d661f7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.112727 5115 generic.go:358] "Generic (PLEG): container finished" podID="3a019ddb-06f4-46e8-b51d-4ff472d661f7" containerID="a3914ab4f691e634c5241cfd2dd62d61db474c12c2855c3a15a3e9e6d1375506" exitCode=0
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.112816 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6cd84fb898-9bd7b" event={"ID":"3a019ddb-06f4-46e8-b51d-4ff472d661f7","Type":"ContainerDied","Data":"a3914ab4f691e634c5241cfd2dd62d61db474c12c2855c3a15a3e9e6d1375506"}
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.112833 5115 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6cd84fb898-9bd7b"
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.112862 5115 scope.go:117] "RemoveContainer" containerID="a3914ab4f691e634c5241cfd2dd62d61db474c12c2855c3a15a3e9e6d1375506"
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.112851 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6cd84fb898-9bd7b" event={"ID":"3a019ddb-06f4-46e8-b51d-4ff472d661f7","Type":"ContainerDied","Data":"c141319af3b1f534df3ecc8828deaf51aa01a1c31d459530b0f3e2eb484ddb7f"}
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.114388 5115 generic.go:358] "Generic (PLEG): container finished" podID="40317894-58cf-4fd9-bbfe-0338895305fb" containerID="d0381859b81111be73fed33e571215f4eb400274eea60f9124171aa0fdfea2b4" exitCode=0
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.114470 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-64f6849bcb-56vwt" event={"ID":"40317894-58cf-4fd9-bbfe-0338895305fb","Type":"ContainerDied","Data":"d0381859b81111be73fed33e571215f4eb400274eea60f9124171aa0fdfea2b4"}
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.138985 5115 scope.go:117] "RemoveContainer" containerID="a3914ab4f691e634c5241cfd2dd62d61db474c12c2855c3a15a3e9e6d1375506"
Jan 20 09:12:56 crc kubenswrapper[5115]: E0120 09:12:56.139392 5115 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3914ab4f691e634c5241cfd2dd62d61db474c12c2855c3a15a3e9e6d1375506\": container with ID starting with a3914ab4f691e634c5241cfd2dd62d61db474c12c2855c3a15a3e9e6d1375506 not found: ID does not exist" containerID="a3914ab4f691e634c5241cfd2dd62d61db474c12c2855c3a15a3e9e6d1375506"
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.139436 5115 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3914ab4f691e634c5241cfd2dd62d61db474c12c2855c3a15a3e9e6d1375506"} err="failed to get container status \"a3914ab4f691e634c5241cfd2dd62d61db474c12c2855c3a15a3e9e6d1375506\": rpc error: code = NotFound desc = could not find container \"a3914ab4f691e634c5241cfd2dd62d61db474c12c2855c3a15a3e9e6d1375506\": container with ID starting with a3914ab4f691e634c5241cfd2dd62d61db474c12c2855c3a15a3e9e6d1375506 not found: ID does not exist"
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.142174 5115 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-64f6849bcb-56vwt"
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.148626 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/008b7b41-90a9-4871-a024-a4a8736d5239-client-ca\") pod \"route-controller-manager-fd648b944-86lpr\" (UID: \"008b7b41-90a9-4871-a024-a4a8736d5239\") " pod="openshift-route-controller-manager/route-controller-manager-fd648b944-86lpr"
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.148776 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/008b7b41-90a9-4871-a024-a4a8736d5239-config\") pod \"route-controller-manager-fd648b944-86lpr\" (UID: \"008b7b41-90a9-4871-a024-a4a8736d5239\") " pod="openshift-route-controller-manager/route-controller-manager-fd648b944-86lpr"
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.148880 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsn42\" (UniqueName: \"kubernetes.io/projected/008b7b41-90a9-4871-a024-a4a8736d5239-kube-api-access-qsn42\") pod \"route-controller-manager-fd648b944-86lpr\" (UID: \"008b7b41-90a9-4871-a024-a4a8736d5239\") " pod="openshift-route-controller-manager/route-controller-manager-fd648b944-86lpr"
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.148970 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/008b7b41-90a9-4871-a024-a4a8736d5239-serving-cert\") pod \"route-controller-manager-fd648b944-86lpr\" (UID: \"008b7b41-90a9-4871-a024-a4a8736d5239\") " pod="openshift-route-controller-manager/route-controller-manager-fd648b944-86lpr"
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.149030 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/008b7b41-90a9-4871-a024-a4a8736d5239-tmp\") pod \"route-controller-manager-fd648b944-86lpr\" (UID: \"008b7b41-90a9-4871-a024-a4a8736d5239\") " pod="openshift-route-controller-manager/route-controller-manager-fd648b944-86lpr"
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.149085 5115 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3a019ddb-06f4-46e8-b51d-4ff472d661f7-client-ca\") on node \"crc\" DevicePath \"\""
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.149101 5115 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a019ddb-06f4-46e8-b51d-4ff472d661f7-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.149113 5115 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a019ddb-06f4-46e8-b51d-4ff472d661f7-config\") on node \"crc\" DevicePath \"\""
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.149125 5115 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3a019ddb-06f4-46e8-b51d-4ff472d661f7-tmp\") on node \"crc\" DevicePath \"\""
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.149136 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-svdp4\" (UniqueName: \"kubernetes.io/projected/3a019ddb-06f4-46e8-b51d-4ff472d661f7-kube-api-access-svdp4\") on node \"crc\" DevicePath \"\""
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.160266 5115 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cd84fb898-9bd7b"]
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.168148 5115 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cd84fb898-9bd7b"]
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.189418 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5498596948-x8xdh"]
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.190747 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="40317894-58cf-4fd9-bbfe-0338895305fb" containerName="controller-manager"
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.190795 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="40317894-58cf-4fd9-bbfe-0338895305fb" containerName="controller-manager"
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.191102 5115 memory_manager.go:356] "RemoveStaleState removing state" podUID="40317894-58cf-4fd9-bbfe-0338895305fb" containerName="controller-manager"
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.200106 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5498596948-x8xdh"]
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.200296 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5498596948-x8xdh"
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.233774 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a019ddb-06f4-46e8-b51d-4ff472d661f7" path="/var/lib/kubelet/pods/3a019ddb-06f4-46e8-b51d-4ff472d661f7/volumes"
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.250465 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40317894-58cf-4fd9-bbfe-0338895305fb-config\") pod \"40317894-58cf-4fd9-bbfe-0338895305fb\" (UID: \"40317894-58cf-4fd9-bbfe-0338895305fb\") "
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.250526 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwtk2\" (UniqueName: \"kubernetes.io/projected/40317894-58cf-4fd9-bbfe-0338895305fb-kube-api-access-vwtk2\") pod \"40317894-58cf-4fd9-bbfe-0338895305fb\" (UID: \"40317894-58cf-4fd9-bbfe-0338895305fb\") "
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.250612 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/40317894-58cf-4fd9-bbfe-0338895305fb-serving-cert\") pod \"40317894-58cf-4fd9-bbfe-0338895305fb\" (UID: \"40317894-58cf-4fd9-bbfe-0338895305fb\") "
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.250654 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/40317894-58cf-4fd9-bbfe-0338895305fb-tmp\") pod \"40317894-58cf-4fd9-bbfe-0338895305fb\" (UID: \"40317894-58cf-4fd9-bbfe-0338895305fb\") "
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.250714 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/40317894-58cf-4fd9-bbfe-0338895305fb-client-ca\") pod \"40317894-58cf-4fd9-bbfe-0338895305fb\" (UID: \"40317894-58cf-4fd9-bbfe-0338895305fb\") "
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.250799 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/40317894-58cf-4fd9-bbfe-0338895305fb-proxy-ca-bundles\") pod \"40317894-58cf-4fd9-bbfe-0338895305fb\" (UID: \"40317894-58cf-4fd9-bbfe-0338895305fb\") "
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.251046 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/008b7b41-90a9-4871-a024-a4a8736d5239-client-ca\") pod \"route-controller-manager-fd648b944-86lpr\" (UID: \"008b7b41-90a9-4871-a024-a4a8736d5239\") " pod="openshift-route-controller-manager/route-controller-manager-fd648b944-86lpr"
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.251096 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/008b7b41-90a9-4871-a024-a4a8736d5239-config\") pod \"route-controller-manager-fd648b944-86lpr\" (UID: \"008b7b41-90a9-4871-a024-a4a8736d5239\") " pod="openshift-route-controller-manager/route-controller-manager-fd648b944-86lpr"
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.251131 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qsn42\" (UniqueName: \"kubernetes.io/projected/008b7b41-90a9-4871-a024-a4a8736d5239-kube-api-access-qsn42\") pod \"route-controller-manager-fd648b944-86lpr\" (UID: \"008b7b41-90a9-4871-a024-a4a8736d5239\") " pod="openshift-route-controller-manager/route-controller-manager-fd648b944-86lpr"
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.251164 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/008b7b41-90a9-4871-a024-a4a8736d5239-serving-cert\") pod \"route-controller-manager-fd648b944-86lpr\" (UID: \"008b7b41-90a9-4871-a024-a4a8736d5239\") " pod="openshift-route-controller-manager/route-controller-manager-fd648b944-86lpr"
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.251203 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/008b7b41-90a9-4871-a024-a4a8736d5239-tmp\") pod \"route-controller-manager-fd648b944-86lpr\" (UID: \"008b7b41-90a9-4871-a024-a4a8736d5239\") " pod="openshift-route-controller-manager/route-controller-manager-fd648b944-86lpr"
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.251652 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/008b7b41-90a9-4871-a024-a4a8736d5239-tmp\") pod \"route-controller-manager-fd648b944-86lpr\" (UID: \"008b7b41-90a9-4871-a024-a4a8736d5239\") " pod="openshift-route-controller-manager/route-controller-manager-fd648b944-86lpr"
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.252315 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40317894-58cf-4fd9-bbfe-0338895305fb-config" (OuterVolumeSpecName: "config") pod "40317894-58cf-4fd9-bbfe-0338895305fb" (UID: "40317894-58cf-4fd9-bbfe-0338895305fb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.252608 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/008b7b41-90a9-4871-a024-a4a8736d5239-client-ca\") pod \"route-controller-manager-fd648b944-86lpr\" (UID: \"008b7b41-90a9-4871-a024-a4a8736d5239\") " pod="openshift-route-controller-manager/route-controller-manager-fd648b944-86lpr"
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.252668 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/008b7b41-90a9-4871-a024-a4a8736d5239-config\") pod \"route-controller-manager-fd648b944-86lpr\" (UID: \"008b7b41-90a9-4871-a024-a4a8736d5239\") " pod="openshift-route-controller-manager/route-controller-manager-fd648b944-86lpr"
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.251819 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40317894-58cf-4fd9-bbfe-0338895305fb-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "40317894-58cf-4fd9-bbfe-0338895305fb" (UID: "40317894-58cf-4fd9-bbfe-0338895305fb"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.253586 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40317894-58cf-4fd9-bbfe-0338895305fb-tmp" (OuterVolumeSpecName: "tmp") pod "40317894-58cf-4fd9-bbfe-0338895305fb" (UID: "40317894-58cf-4fd9-bbfe-0338895305fb"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.253908 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40317894-58cf-4fd9-bbfe-0338895305fb-client-ca" (OuterVolumeSpecName: "client-ca") pod "40317894-58cf-4fd9-bbfe-0338895305fb" (UID: "40317894-58cf-4fd9-bbfe-0338895305fb"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.256443 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40317894-58cf-4fd9-bbfe-0338895305fb-kube-api-access-vwtk2" (OuterVolumeSpecName: "kube-api-access-vwtk2") pod "40317894-58cf-4fd9-bbfe-0338895305fb" (UID: "40317894-58cf-4fd9-bbfe-0338895305fb"). InnerVolumeSpecName "kube-api-access-vwtk2". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.256991 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40317894-58cf-4fd9-bbfe-0338895305fb-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "40317894-58cf-4fd9-bbfe-0338895305fb" (UID: "40317894-58cf-4fd9-bbfe-0338895305fb"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.257941 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/008b7b41-90a9-4871-a024-a4a8736d5239-serving-cert\") pod \"route-controller-manager-fd648b944-86lpr\" (UID: \"008b7b41-90a9-4871-a024-a4a8736d5239\") " pod="openshift-route-controller-manager/route-controller-manager-fd648b944-86lpr"
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.268562 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qsn42\" (UniqueName: \"kubernetes.io/projected/008b7b41-90a9-4871-a024-a4a8736d5239-kube-api-access-qsn42\") pod \"route-controller-manager-fd648b944-86lpr\" (UID: \"008b7b41-90a9-4871-a024-a4a8736d5239\") " pod="openshift-route-controller-manager/route-controller-manager-fd648b944-86lpr"
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.310469 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-fd648b944-86lpr"
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.352646 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/16e383ce-519b-41ba-8dda-d0d71e14316e-client-ca\") pod \"controller-manager-5498596948-x8xdh\" (UID: \"16e383ce-519b-41ba-8dda-d0d71e14316e\") " pod="openshift-controller-manager/controller-manager-5498596948-x8xdh"
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.352718 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16e383ce-519b-41ba-8dda-d0d71e14316e-serving-cert\") pod \"controller-manager-5498596948-x8xdh\" (UID: \"16e383ce-519b-41ba-8dda-d0d71e14316e\") " pod="openshift-controller-manager/controller-manager-5498596948-x8xdh"
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.352747 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/16e383ce-519b-41ba-8dda-d0d71e14316e-tmp\") pod \"controller-manager-5498596948-x8xdh\" (UID: \"16e383ce-519b-41ba-8dda-d0d71e14316e\") " pod="openshift-controller-manager/controller-manager-5498596948-x8xdh"
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.352789 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/16e383ce-519b-41ba-8dda-d0d71e14316e-proxy-ca-bundles\") pod \"controller-manager-5498596948-x8xdh\" (UID: \"16e383ce-519b-41ba-8dda-d0d71e14316e\") " pod="openshift-controller-manager/controller-manager-5498596948-x8xdh"
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.352810 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16e383ce-519b-41ba-8dda-d0d71e14316e-config\") pod \"controller-manager-5498596948-x8xdh\" (UID: \"16e383ce-519b-41ba-8dda-d0d71e14316e\") " pod="openshift-controller-manager/controller-manager-5498596948-x8xdh"
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.352840 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5j42\" (UniqueName: \"kubernetes.io/projected/16e383ce-519b-41ba-8dda-d0d71e14316e-kube-api-access-q5j42\") pod \"controller-manager-5498596948-x8xdh\" (UID: \"16e383ce-519b-41ba-8dda-d0d71e14316e\") " pod="openshift-controller-manager/controller-manager-5498596948-x8xdh"
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.352885 5115 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/40317894-58cf-4fd9-bbfe-0338895305fb-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.352943 5115 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40317894-58cf-4fd9-bbfe-0338895305fb-config\") on node \"crc\" DevicePath \"\""
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.352958 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vwtk2\" (UniqueName: \"kubernetes.io/projected/40317894-58cf-4fd9-bbfe-0338895305fb-kube-api-access-vwtk2\") on node \"crc\" DevicePath \"\""
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.352972 5115 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/40317894-58cf-4fd9-bbfe-0338895305fb-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.352983 5115 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/40317894-58cf-4fd9-bbfe-0338895305fb-tmp\") on node \"crc\" DevicePath \"\""
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.352993 5115 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/40317894-58cf-4fd9-bbfe-0338895305fb-client-ca\") on node \"crc\" DevicePath \"\""
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.453977 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/16e383ce-519b-41ba-8dda-d0d71e14316e-client-ca\") pod \"controller-manager-5498596948-x8xdh\" (UID: \"16e383ce-519b-41ba-8dda-d0d71e14316e\") " pod="openshift-controller-manager/controller-manager-5498596948-x8xdh"
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.454475 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16e383ce-519b-41ba-8dda-d0d71e14316e-serving-cert\") pod \"controller-manager-5498596948-x8xdh\" (UID: \"16e383ce-519b-41ba-8dda-d0d71e14316e\") " pod="openshift-controller-manager/controller-manager-5498596948-x8xdh"
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.454505 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/16e383ce-519b-41ba-8dda-d0d71e14316e-tmp\") pod \"controller-manager-5498596948-x8xdh\" (UID: \"16e383ce-519b-41ba-8dda-d0d71e14316e\") " pod="openshift-controller-manager/controller-manager-5498596948-x8xdh"
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.454547 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/16e383ce-519b-41ba-8dda-d0d71e14316e-proxy-ca-bundles\") pod \"controller-manager-5498596948-x8xdh\" (UID: \"16e383ce-519b-41ba-8dda-d0d71e14316e\") " pod="openshift-controller-manager/controller-manager-5498596948-x8xdh"
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.454570 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16e383ce-519b-41ba-8dda-d0d71e14316e-config\") pod \"controller-manager-5498596948-x8xdh\" (UID: \"16e383ce-519b-41ba-8dda-d0d71e14316e\") " pod="openshift-controller-manager/controller-manager-5498596948-x8xdh"
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.454598 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-q5j42\" (UniqueName: \"kubernetes.io/projected/16e383ce-519b-41ba-8dda-d0d71e14316e-kube-api-access-q5j42\") pod \"controller-manager-5498596948-x8xdh\" (UID: \"16e383ce-519b-41ba-8dda-d0d71e14316e\") " pod="openshift-controller-manager/controller-manager-5498596948-x8xdh"
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.455064 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/16e383ce-519b-41ba-8dda-d0d71e14316e-client-ca\") pod \"controller-manager-5498596948-x8xdh\" (UID: \"16e383ce-519b-41ba-8dda-d0d71e14316e\") " pod="openshift-controller-manager/controller-manager-5498596948-x8xdh"
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.455294 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/16e383ce-519b-41ba-8dda-d0d71e14316e-tmp\") pod \"controller-manager-5498596948-x8xdh\" (UID: \"16e383ce-519b-41ba-8dda-d0d71e14316e\") " pod="openshift-controller-manager/controller-manager-5498596948-x8xdh"
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.456074 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/16e383ce-519b-41ba-8dda-d0d71e14316e-proxy-ca-bundles\") pod \"controller-manager-5498596948-x8xdh\" (UID: \"16e383ce-519b-41ba-8dda-d0d71e14316e\") " pod="openshift-controller-manager/controller-manager-5498596948-x8xdh"
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.459445 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16e383ce-519b-41ba-8dda-d0d71e14316e-config\") pod \"controller-manager-5498596948-x8xdh\" (UID: \"16e383ce-519b-41ba-8dda-d0d71e14316e\") " pod="openshift-controller-manager/controller-manager-5498596948-x8xdh"
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.466055 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16e383ce-519b-41ba-8dda-d0d71e14316e-serving-cert\") pod \"controller-manager-5498596948-x8xdh\" (UID: \"16e383ce-519b-41ba-8dda-d0d71e14316e\") " pod="openshift-controller-manager/controller-manager-5498596948-x8xdh"
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.477607 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5j42\" (UniqueName: \"kubernetes.io/projected/16e383ce-519b-41ba-8dda-d0d71e14316e-kube-api-access-q5j42\") pod \"controller-manager-5498596948-x8xdh\" (UID: \"16e383ce-519b-41ba-8dda-d0d71e14316e\") " pod="openshift-controller-manager/controller-manager-5498596948-x8xdh"
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.515735 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5498596948-x8xdh"
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.718964 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-fd648b944-86lpr"]
Jan 20 09:12:56 crc kubenswrapper[5115]: W0120 09:12:56.722676 5115 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod008b7b41_90a9_4871_a024_a4a8736d5239.slice/crio-f0da1915c436f35830eb682754d71a8751f31073762c17f3edc3efcf56bdbf51 WatchSource:0}: Error finding container f0da1915c436f35830eb682754d71a8751f31073762c17f3edc3efcf56bdbf51: Status 404 returned error can't find the container with id f0da1915c436f35830eb682754d71a8751f31073762c17f3edc3efcf56bdbf51
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.920786 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5498596948-x8xdh"]
Jan 20 09:12:56 crc kubenswrapper[5115]: W0120 09:12:56.929074 5115 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16e383ce_519b_41ba_8dda_d0d71e14316e.slice/crio-c218df20d2deac261a17d41c4da30b290a798b6508dd424db9f657d86d094615 WatchSource:0}: Error finding container c218df20d2deac261a17d41c4da30b290a798b6508dd424db9f657d86d094615: Status 404 returned error can't find the container with id c218df20d2deac261a17d41c4da30b290a798b6508dd424db9f657d86d094615
Jan 20 09:12:57 crc kubenswrapper[5115]: I0120 09:12:57.121721 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5498596948-x8xdh" event={"ID":"16e383ce-519b-41ba-8dda-d0d71e14316e","Type":"ContainerStarted","Data":"3cb0e56fdde9f8c458e1b54cda0c342be87c577e32707c597207b4b4f034a583"}
Jan 20 09:12:57 crc kubenswrapper[5115]: I0120 09:12:57.121773 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5498596948-x8xdh" event={"ID":"16e383ce-519b-41ba-8dda-d0d71e14316e","Type":"ContainerStarted","Data":"c218df20d2deac261a17d41c4da30b290a798b6508dd424db9f657d86d094615"}
Jan 20 09:12:57 crc kubenswrapper[5115]: I0120 09:12:57.123180 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-5498596948-x8xdh"
Jan 20 09:12:57 crc kubenswrapper[5115]: I0120 09:12:57.124468 5115 patch_prober.go:28] interesting pod/controller-manager-5498596948-x8xdh container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.70:8443/healthz\": dial tcp 10.217.0.70:8443: connect: connection refused" start-of-body=
Jan 20 09:12:57 crc kubenswrapper[5115]: I0120 09:12:57.124515 5115 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-5498596948-x8xdh" podUID="16e383ce-519b-41ba-8dda-d0d71e14316e" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.70:8443/healthz\": dial tcp 10.217.0.70:8443: connect: connection refused"
Jan 20 09:12:57 crc kubenswrapper[5115]: I0120 09:12:57.125104 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-64f6849bcb-56vwt" event={"ID":"40317894-58cf-4fd9-bbfe-0338895305fb","Type":"ContainerDied","Data":"ed37d8ccf346c5ab6e9a1610b0705d4a9b425c2e5508f8d656d7c2c9e96a8a2c"}
Jan 20 09:12:57 crc kubenswrapper[5115]: I0120 09:12:57.125131 5115 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-64f6849bcb-56vwt"
Jan 20 09:12:57 crc kubenswrapper[5115]: I0120 09:12:57.125148 5115 scope.go:117] "RemoveContainer" containerID="d0381859b81111be73fed33e571215f4eb400274eea60f9124171aa0fdfea2b4"
Jan 20 09:12:57 crc kubenswrapper[5115]: I0120 09:12:57.127083 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-fd648b944-86lpr" event={"ID":"008b7b41-90a9-4871-a024-a4a8736d5239","Type":"ContainerStarted","Data":"487dc612f01a3ebd6902165463db1ae797ab9f3c8a5b5da1d24c0a8ff2e2b31d"}
Jan 20 09:12:57 crc kubenswrapper[5115]: I0120 09:12:57.127108 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-fd648b944-86lpr" event={"ID":"008b7b41-90a9-4871-a024-a4a8736d5239","Type":"ContainerStarted","Data":"f0da1915c436f35830eb682754d71a8751f31073762c17f3edc3efcf56bdbf51"}
Jan 20 09:12:57 crc kubenswrapper[5115]: I0120 09:12:57.127307 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-fd648b944-86lpr"
Jan 20 09:12:57 crc kubenswrapper[5115]: I0120 09:12:57.147592 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5498596948-x8xdh" podStartSLOduration=2.147571675 podStartE2EDuration="2.147571675s" podCreationTimestamp="2026-01-20 09:12:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:12:57.144304882 +0000 UTC m=+287.313083422" watchObservedRunningTime="2026-01-20 09:12:57.147571675 +0000 UTC m=+287.316350215"
Jan 20 09:12:57 crc kubenswrapper[5115]: I0120 09:12:57.176916 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-fd648b944-86lpr" podStartSLOduration=2.176863127 podStartE2EDuration="2.176863127s" podCreationTimestamp="2026-01-20 09:12:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:12:57.17309336 +0000 UTC m=+287.341871900" watchObservedRunningTime="2026-01-20 09:12:57.176863127 +0000 UTC m=+287.345641677"
Jan 20 09:12:57 crc kubenswrapper[5115]: I0120 09:12:57.189373 5115 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-64f6849bcb-56vwt"]
Jan 20 09:12:57 crc kubenswrapper[5115]: I0120 09:12:57.194212 5115 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-64f6849bcb-56vwt"]
Jan 20 09:12:57 crc kubenswrapper[5115]: I0120 09:12:57.563198 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-fd648b944-86lpr"
Jan 20 09:12:57 crc kubenswrapper[5115]: I0120 09:12:57.682566 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\""
Jan 20 09:12:58 crc kubenswrapper[5115]: I0120 09:12:58.148832 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5498596948-x8xdh"
Jan 20 09:12:58 crc kubenswrapper[5115]: I0120 09:12:58.227479 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40317894-58cf-4fd9-bbfe-0338895305fb" path="/var/lib/kubelet/pods/40317894-58cf-4fd9-bbfe-0338895305fb/volumes"
Jan 20 09:12:59 crc kubenswrapper[5115]: I0120 09:12:59.240885 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\""
Jan 20 09:12:59 crc kubenswrapper[5115]: I0120 09:12:59.342910 5115 ???:1] "http: TLS handshake error from 192.168.126.11:55676: no serving certificate available for the kubelet"
Jan 20 09:12:59 crc kubenswrapper[5115]: I0120 09:12:59.449393 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\""
Jan 20 09:13:00 crc kubenswrapper[5115]: I0120 09:13:00.575268 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\""
Jan 20 09:13:01 crc kubenswrapper[5115]: I0120 09:13:01.399585 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\""
Jan 20 09:13:01 crc kubenswrapper[5115]: I0120 09:13:01.674870 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\""
Jan 20 09:13:03 crc kubenswrapper[5115]: I0120 09:13:03.462555 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\""
Jan 20 09:13:03 crc kubenswrapper[5115]: I0120 09:13:03.707481 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\""
Jan 20 09:13:03 crc kubenswrapper[5115]: I0120 09:13:03.867357 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\""
Jan 20 09:13:03 crc kubenswrapper[5115]: I0120 09:13:03.888312 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\""
Jan 20 09:13:05 crc kubenswrapper[5115]: I0120 09:13:05.926576 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\""
Jan 20 09:13:07 crc kubenswrapper[5115]: I0120 09:13:07.570587 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\""
Jan 20 09:13:08 crc kubenswrapper[5115]: I0120 09:13:08.483417 5115 patch_prober.go:28] interesting pod/machine-config-daemon-zvfcd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 20 09:13:08 crc kubenswrapper[5115]: I0120 09:13:08.483481 5115 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zvfcd" podUID="dc89765b-3b00-4f86-ae67-a5088c182918" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 20 09:13:08 crc kubenswrapper[5115]: I0120 09:13:08.483532 5115 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zvfcd"
Jan 20 09:13:08 crc kubenswrapper[5115]: I0120 09:13:08.484064 5115 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"95c07e0438f206b88563e2b39a6250eb2706530b4f1d2646ed4348287befe586"} pod="openshift-machine-config-operator/machine-config-daemon-zvfcd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 20 09:13:08 crc kubenswrapper[5115]: I0120 09:13:08.484119 5115 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zvfcd" podUID="dc89765b-3b00-4f86-ae67-a5088c182918" containerName="machine-config-daemon" containerID="cri-o://95c07e0438f206b88563e2b39a6250eb2706530b4f1d2646ed4348287befe586" gracePeriod=600
Jan 20 09:13:09 crc kubenswrapper[5115]: I0120 09:13:09.215508 5115
generic.go:358] "Generic (PLEG): container finished" podID="dc89765b-3b00-4f86-ae67-a5088c182918" containerID="95c07e0438f206b88563e2b39a6250eb2706530b4f1d2646ed4348287befe586" exitCode=0 Jan 20 09:13:09 crc kubenswrapper[5115]: I0120 09:13:09.215612 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zvfcd" event={"ID":"dc89765b-3b00-4f86-ae67-a5088c182918","Type":"ContainerDied","Data":"95c07e0438f206b88563e2b39a6250eb2706530b4f1d2646ed4348287befe586"} Jan 20 09:13:09 crc kubenswrapper[5115]: I0120 09:13:09.216422 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zvfcd" event={"ID":"dc89765b-3b00-4f86-ae67-a5088c182918","Type":"ContainerStarted","Data":"91dc8479398c4ca8a212adb6ee5aaefb3869b82e5fade77dc4b295c2c867eb29"} Jan 20 09:13:10 crc kubenswrapper[5115]: I0120 09:13:10.379049 5115 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 20 09:13:10 crc kubenswrapper[5115]: I0120 09:13:10.379063 5115 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 20 09:13:13 crc kubenswrapper[5115]: I0120 09:13:13.629595 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\"" Jan 20 09:13:27 crc kubenswrapper[5115]: I0120 09:13:27.372049 5115 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 20 09:13:36 crc kubenswrapper[5115]: I0120 09:13:36.058172 5115 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-fd648b944-86lpr"] Jan 20 09:13:36 crc 
kubenswrapper[5115]: I0120 09:13:36.058996 5115 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-fd648b944-86lpr" podUID="008b7b41-90a9-4871-a024-a4a8736d5239" containerName="route-controller-manager" containerID="cri-o://487dc612f01a3ebd6902165463db1ae797ab9f3c8a5b5da1d24c0a8ff2e2b31d" gracePeriod=30 Jan 20 09:13:36 crc kubenswrapper[5115]: I0120 09:13:36.395265 5115 generic.go:358] "Generic (PLEG): container finished" podID="008b7b41-90a9-4871-a024-a4a8736d5239" containerID="487dc612f01a3ebd6902165463db1ae797ab9f3c8a5b5da1d24c0a8ff2e2b31d" exitCode=0 Jan 20 09:13:36 crc kubenswrapper[5115]: I0120 09:13:36.395411 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-fd648b944-86lpr" event={"ID":"008b7b41-90a9-4871-a024-a4a8736d5239","Type":"ContainerDied","Data":"487dc612f01a3ebd6902165463db1ae797ab9f3c8a5b5da1d24c0a8ff2e2b31d"} Jan 20 09:13:36 crc kubenswrapper[5115]: I0120 09:13:36.572317 5115 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-fd648b944-86lpr" Jan 20 09:13:36 crc kubenswrapper[5115]: I0120 09:13:36.637058 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cd84fb898-7pw9r"] Jan 20 09:13:36 crc kubenswrapper[5115]: I0120 09:13:36.638011 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="008b7b41-90a9-4871-a024-a4a8736d5239" containerName="route-controller-manager" Jan 20 09:13:36 crc kubenswrapper[5115]: I0120 09:13:36.638041 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="008b7b41-90a9-4871-a024-a4a8736d5239" containerName="route-controller-manager" Jan 20 09:13:36 crc kubenswrapper[5115]: I0120 09:13:36.638171 5115 memory_manager.go:356] "RemoveStaleState removing state" podUID="008b7b41-90a9-4871-a024-a4a8736d5239" containerName="route-controller-manager" Jan 20 09:13:36 crc kubenswrapper[5115]: I0120 09:13:36.642854 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6cd84fb898-7pw9r" Jan 20 09:13:36 crc kubenswrapper[5115]: I0120 09:13:36.648658 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cd84fb898-7pw9r"] Jan 20 09:13:36 crc kubenswrapper[5115]: I0120 09:13:36.671535 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qsn42\" (UniqueName: \"kubernetes.io/projected/008b7b41-90a9-4871-a024-a4a8736d5239-kube-api-access-qsn42\") pod \"008b7b41-90a9-4871-a024-a4a8736d5239\" (UID: \"008b7b41-90a9-4871-a024-a4a8736d5239\") " Jan 20 09:13:36 crc kubenswrapper[5115]: I0120 09:13:36.671601 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/008b7b41-90a9-4871-a024-a4a8736d5239-client-ca\") pod \"008b7b41-90a9-4871-a024-a4a8736d5239\" (UID: \"008b7b41-90a9-4871-a024-a4a8736d5239\") " Jan 20 09:13:36 crc kubenswrapper[5115]: I0120 09:13:36.671714 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/008b7b41-90a9-4871-a024-a4a8736d5239-config\") pod \"008b7b41-90a9-4871-a024-a4a8736d5239\" (UID: \"008b7b41-90a9-4871-a024-a4a8736d5239\") " Jan 20 09:13:36 crc kubenswrapper[5115]: I0120 09:13:36.673050 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/008b7b41-90a9-4871-a024-a4a8736d5239-client-ca" (OuterVolumeSpecName: "client-ca") pod "008b7b41-90a9-4871-a024-a4a8736d5239" (UID: "008b7b41-90a9-4871-a024-a4a8736d5239"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:13:36 crc kubenswrapper[5115]: I0120 09:13:36.673123 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/008b7b41-90a9-4871-a024-a4a8736d5239-tmp\") pod \"008b7b41-90a9-4871-a024-a4a8736d5239\" (UID: \"008b7b41-90a9-4871-a024-a4a8736d5239\") " Jan 20 09:13:36 crc kubenswrapper[5115]: I0120 09:13:36.673165 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/008b7b41-90a9-4871-a024-a4a8736d5239-serving-cert\") pod \"008b7b41-90a9-4871-a024-a4a8736d5239\" (UID: \"008b7b41-90a9-4871-a024-a4a8736d5239\") " Jan 20 09:13:36 crc kubenswrapper[5115]: I0120 09:13:36.673600 5115 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/008b7b41-90a9-4871-a024-a4a8736d5239-client-ca\") on node \"crc\" DevicePath \"\"" Jan 20 09:13:36 crc kubenswrapper[5115]: I0120 09:13:36.673870 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/008b7b41-90a9-4871-a024-a4a8736d5239-tmp" (OuterVolumeSpecName: "tmp") pod "008b7b41-90a9-4871-a024-a4a8736d5239" (UID: "008b7b41-90a9-4871-a024-a4a8736d5239"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 20 09:13:36 crc kubenswrapper[5115]: I0120 09:13:36.674640 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/008b7b41-90a9-4871-a024-a4a8736d5239-config" (OuterVolumeSpecName: "config") pod "008b7b41-90a9-4871-a024-a4a8736d5239" (UID: "008b7b41-90a9-4871-a024-a4a8736d5239"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:13:36 crc kubenswrapper[5115]: I0120 09:13:36.681136 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/008b7b41-90a9-4871-a024-a4a8736d5239-kube-api-access-qsn42" (OuterVolumeSpecName: "kube-api-access-qsn42") pod "008b7b41-90a9-4871-a024-a4a8736d5239" (UID: "008b7b41-90a9-4871-a024-a4a8736d5239"). InnerVolumeSpecName "kube-api-access-qsn42". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:13:36 crc kubenswrapper[5115]: I0120 09:13:36.681514 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/008b7b41-90a9-4871-a024-a4a8736d5239-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "008b7b41-90a9-4871-a024-a4a8736d5239" (UID: "008b7b41-90a9-4871-a024-a4a8736d5239"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:13:36 crc kubenswrapper[5115]: I0120 09:13:36.774867 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwzpt\" (UniqueName: \"kubernetes.io/projected/6f410e5c-783d-4416-890a-e2290c4e3505-kube-api-access-dwzpt\") pod \"route-controller-manager-6cd84fb898-7pw9r\" (UID: \"6f410e5c-783d-4416-890a-e2290c4e3505\") " pod="openshift-route-controller-manager/route-controller-manager-6cd84fb898-7pw9r" Jan 20 09:13:36 crc kubenswrapper[5115]: I0120 09:13:36.774970 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6f410e5c-783d-4416-890a-e2290c4e3505-client-ca\") pod \"route-controller-manager-6cd84fb898-7pw9r\" (UID: \"6f410e5c-783d-4416-890a-e2290c4e3505\") " pod="openshift-route-controller-manager/route-controller-manager-6cd84fb898-7pw9r" Jan 20 09:13:36 crc kubenswrapper[5115]: I0120 09:13:36.775196 5115 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6f410e5c-783d-4416-890a-e2290c4e3505-serving-cert\") pod \"route-controller-manager-6cd84fb898-7pw9r\" (UID: \"6f410e5c-783d-4416-890a-e2290c4e3505\") " pod="openshift-route-controller-manager/route-controller-manager-6cd84fb898-7pw9r" Jan 20 09:13:36 crc kubenswrapper[5115]: I0120 09:13:36.775347 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6f410e5c-783d-4416-890a-e2290c4e3505-tmp\") pod \"route-controller-manager-6cd84fb898-7pw9r\" (UID: \"6f410e5c-783d-4416-890a-e2290c4e3505\") " pod="openshift-route-controller-manager/route-controller-manager-6cd84fb898-7pw9r" Jan 20 09:13:36 crc kubenswrapper[5115]: I0120 09:13:36.775438 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f410e5c-783d-4416-890a-e2290c4e3505-config\") pod \"route-controller-manager-6cd84fb898-7pw9r\" (UID: \"6f410e5c-783d-4416-890a-e2290c4e3505\") " pod="openshift-route-controller-manager/route-controller-manager-6cd84fb898-7pw9r" Jan 20 09:13:36 crc kubenswrapper[5115]: I0120 09:13:36.775633 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qsn42\" (UniqueName: \"kubernetes.io/projected/008b7b41-90a9-4871-a024-a4a8736d5239-kube-api-access-qsn42\") on node \"crc\" DevicePath \"\"" Jan 20 09:13:36 crc kubenswrapper[5115]: I0120 09:13:36.775661 5115 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/008b7b41-90a9-4871-a024-a4a8736d5239-config\") on node \"crc\" DevicePath \"\"" Jan 20 09:13:36 crc kubenswrapper[5115]: I0120 09:13:36.775671 5115 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/008b7b41-90a9-4871-a024-a4a8736d5239-tmp\") on node 
\"crc\" DevicePath \"\"" Jan 20 09:13:36 crc kubenswrapper[5115]: I0120 09:13:36.775682 5115 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/008b7b41-90a9-4871-a024-a4a8736d5239-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 09:13:36 crc kubenswrapper[5115]: I0120 09:13:36.877115 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6f410e5c-783d-4416-890a-e2290c4e3505-tmp\") pod \"route-controller-manager-6cd84fb898-7pw9r\" (UID: \"6f410e5c-783d-4416-890a-e2290c4e3505\") " pod="openshift-route-controller-manager/route-controller-manager-6cd84fb898-7pw9r" Jan 20 09:13:36 crc kubenswrapper[5115]: I0120 09:13:36.877178 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f410e5c-783d-4416-890a-e2290c4e3505-config\") pod \"route-controller-manager-6cd84fb898-7pw9r\" (UID: \"6f410e5c-783d-4416-890a-e2290c4e3505\") " pod="openshift-route-controller-manager/route-controller-manager-6cd84fb898-7pw9r" Jan 20 09:13:36 crc kubenswrapper[5115]: I0120 09:13:36.877238 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dwzpt\" (UniqueName: \"kubernetes.io/projected/6f410e5c-783d-4416-890a-e2290c4e3505-kube-api-access-dwzpt\") pod \"route-controller-manager-6cd84fb898-7pw9r\" (UID: \"6f410e5c-783d-4416-890a-e2290c4e3505\") " pod="openshift-route-controller-manager/route-controller-manager-6cd84fb898-7pw9r" Jan 20 09:13:36 crc kubenswrapper[5115]: I0120 09:13:36.877289 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6f410e5c-783d-4416-890a-e2290c4e3505-client-ca\") pod \"route-controller-manager-6cd84fb898-7pw9r\" (UID: \"6f410e5c-783d-4416-890a-e2290c4e3505\") " 
pod="openshift-route-controller-manager/route-controller-manager-6cd84fb898-7pw9r" Jan 20 09:13:36 crc kubenswrapper[5115]: I0120 09:13:36.877338 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6f410e5c-783d-4416-890a-e2290c4e3505-serving-cert\") pod \"route-controller-manager-6cd84fb898-7pw9r\" (UID: \"6f410e5c-783d-4416-890a-e2290c4e3505\") " pod="openshift-route-controller-manager/route-controller-manager-6cd84fb898-7pw9r" Jan 20 09:13:36 crc kubenswrapper[5115]: I0120 09:13:36.878157 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6f410e5c-783d-4416-890a-e2290c4e3505-tmp\") pod \"route-controller-manager-6cd84fb898-7pw9r\" (UID: \"6f410e5c-783d-4416-890a-e2290c4e3505\") " pod="openshift-route-controller-manager/route-controller-manager-6cd84fb898-7pw9r" Jan 20 09:13:36 crc kubenswrapper[5115]: I0120 09:13:36.878879 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6f410e5c-783d-4416-890a-e2290c4e3505-client-ca\") pod \"route-controller-manager-6cd84fb898-7pw9r\" (UID: \"6f410e5c-783d-4416-890a-e2290c4e3505\") " pod="openshift-route-controller-manager/route-controller-manager-6cd84fb898-7pw9r" Jan 20 09:13:36 crc kubenswrapper[5115]: I0120 09:13:36.878934 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f410e5c-783d-4416-890a-e2290c4e3505-config\") pod \"route-controller-manager-6cd84fb898-7pw9r\" (UID: \"6f410e5c-783d-4416-890a-e2290c4e3505\") " pod="openshift-route-controller-manager/route-controller-manager-6cd84fb898-7pw9r" Jan 20 09:13:36 crc kubenswrapper[5115]: I0120 09:13:36.884725 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/6f410e5c-783d-4416-890a-e2290c4e3505-serving-cert\") pod \"route-controller-manager-6cd84fb898-7pw9r\" (UID: \"6f410e5c-783d-4416-890a-e2290c4e3505\") " pod="openshift-route-controller-manager/route-controller-manager-6cd84fb898-7pw9r" Jan 20 09:13:36 crc kubenswrapper[5115]: I0120 09:13:36.894156 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwzpt\" (UniqueName: \"kubernetes.io/projected/6f410e5c-783d-4416-890a-e2290c4e3505-kube-api-access-dwzpt\") pod \"route-controller-manager-6cd84fb898-7pw9r\" (UID: \"6f410e5c-783d-4416-890a-e2290c4e3505\") " pod="openshift-route-controller-manager/route-controller-manager-6cd84fb898-7pw9r" Jan 20 09:13:36 crc kubenswrapper[5115]: I0120 09:13:36.967160 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6cd84fb898-7pw9r" Jan 20 09:13:37 crc kubenswrapper[5115]: I0120 09:13:37.394068 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cd84fb898-7pw9r"] Jan 20 09:13:37 crc kubenswrapper[5115]: I0120 09:13:37.404286 5115 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 20 09:13:37 crc kubenswrapper[5115]: I0120 09:13:37.414452 5115 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-fd648b944-86lpr" Jan 20 09:13:37 crc kubenswrapper[5115]: I0120 09:13:37.414450 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-fd648b944-86lpr" event={"ID":"008b7b41-90a9-4871-a024-a4a8736d5239","Type":"ContainerDied","Data":"f0da1915c436f35830eb682754d71a8751f31073762c17f3edc3efcf56bdbf51"} Jan 20 09:13:37 crc kubenswrapper[5115]: I0120 09:13:37.414668 5115 scope.go:117] "RemoveContainer" containerID="487dc612f01a3ebd6902165463db1ae797ab9f3c8a5b5da1d24c0a8ff2e2b31d" Jan 20 09:13:37 crc kubenswrapper[5115]: I0120 09:13:37.424068 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6cd84fb898-7pw9r" event={"ID":"6f410e5c-783d-4416-890a-e2290c4e3505","Type":"ContainerStarted","Data":"11e018dcaaf20f96c4bd4c428aa42ac2297f804e1ff02a43d5e968fdc1f8730e"} Jan 20 09:13:37 crc kubenswrapper[5115]: I0120 09:13:37.460186 5115 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-fd648b944-86lpr"] Jan 20 09:13:37 crc kubenswrapper[5115]: I0120 09:13:37.469146 5115 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-fd648b944-86lpr"] Jan 20 09:13:38 crc kubenswrapper[5115]: I0120 09:13:38.225486 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="008b7b41-90a9-4871-a024-a4a8736d5239" path="/var/lib/kubelet/pods/008b7b41-90a9-4871-a024-a4a8736d5239/volumes" Jan 20 09:13:38 crc kubenswrapper[5115]: I0120 09:13:38.435419 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6cd84fb898-7pw9r" event={"ID":"6f410e5c-783d-4416-890a-e2290c4e3505","Type":"ContainerStarted","Data":"73bf3ff3a8dd27e05a7843dd71052334ace1849d2e63752f441a16d140483dba"} Jan 20 
09:13:38 crc kubenswrapper[5115]: I0120 09:13:38.436561 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-6cd84fb898-7pw9r" Jan 20 09:13:38 crc kubenswrapper[5115]: I0120 09:13:38.445543 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6cd84fb898-7pw9r" Jan 20 09:13:38 crc kubenswrapper[5115]: I0120 09:13:38.460435 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6cd84fb898-7pw9r" podStartSLOduration=2.460415454 podStartE2EDuration="2.460415454s" podCreationTimestamp="2026-01-20 09:13:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:13:38.458120972 +0000 UTC m=+328.626899552" watchObservedRunningTime="2026-01-20 09:13:38.460415454 +0000 UTC m=+328.629193984" Jan 20 09:13:55 crc kubenswrapper[5115]: I0120 09:13:55.343831 5115 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5498596948-x8xdh"] Jan 20 09:13:55 crc kubenswrapper[5115]: I0120 09:13:55.344495 5115 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5498596948-x8xdh" podUID="16e383ce-519b-41ba-8dda-d0d71e14316e" containerName="controller-manager" containerID="cri-o://3cb0e56fdde9f8c458e1b54cda0c342be87c577e32707c597207b4b4f034a583" gracePeriod=30 Jan 20 09:13:55 crc kubenswrapper[5115]: I0120 09:13:55.545825 5115 generic.go:358] "Generic (PLEG): container finished" podID="16e383ce-519b-41ba-8dda-d0d71e14316e" containerID="3cb0e56fdde9f8c458e1b54cda0c342be87c577e32707c597207b4b4f034a583" exitCode=0 Jan 20 09:13:55 crc kubenswrapper[5115]: I0120 09:13:55.545929 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-5498596948-x8xdh" event={"ID":"16e383ce-519b-41ba-8dda-d0d71e14316e","Type":"ContainerDied","Data":"3cb0e56fdde9f8c458e1b54cda0c342be87c577e32707c597207b4b4f034a583"} Jan 20 09:13:56 crc kubenswrapper[5115]: I0120 09:13:56.328543 5115 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5498596948-x8xdh" Jan 20 09:13:56 crc kubenswrapper[5115]: I0120 09:13:56.362301 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-64f6849bcb-vvczq"] Jan 20 09:13:56 crc kubenswrapper[5115]: I0120 09:13:56.362963 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="16e383ce-519b-41ba-8dda-d0d71e14316e" containerName="controller-manager" Jan 20 09:13:56 crc kubenswrapper[5115]: I0120 09:13:56.362978 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="16e383ce-519b-41ba-8dda-d0d71e14316e" containerName="controller-manager" Jan 20 09:13:56 crc kubenswrapper[5115]: I0120 09:13:56.363265 5115 memory_manager.go:356] "RemoveStaleState removing state" podUID="16e383ce-519b-41ba-8dda-d0d71e14316e" containerName="controller-manager" Jan 20 09:13:56 crc kubenswrapper[5115]: I0120 09:13:56.369665 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-64f6849bcb-vvczq" Jan 20 09:13:56 crc kubenswrapper[5115]: I0120 09:13:56.381251 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-64f6849bcb-vvczq"] Jan 20 09:13:56 crc kubenswrapper[5115]: I0120 09:13:56.443923 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/16e383ce-519b-41ba-8dda-d0d71e14316e-client-ca\") pod \"16e383ce-519b-41ba-8dda-d0d71e14316e\" (UID: \"16e383ce-519b-41ba-8dda-d0d71e14316e\") " Jan 20 09:13:56 crc kubenswrapper[5115]: I0120 09:13:56.444021 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5j42\" (UniqueName: \"kubernetes.io/projected/16e383ce-519b-41ba-8dda-d0d71e14316e-kube-api-access-q5j42\") pod \"16e383ce-519b-41ba-8dda-d0d71e14316e\" (UID: \"16e383ce-519b-41ba-8dda-d0d71e14316e\") " Jan 20 09:13:56 crc kubenswrapper[5115]: I0120 09:13:56.444107 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16e383ce-519b-41ba-8dda-d0d71e14316e-serving-cert\") pod \"16e383ce-519b-41ba-8dda-d0d71e14316e\" (UID: \"16e383ce-519b-41ba-8dda-d0d71e14316e\") " Jan 20 09:13:56 crc kubenswrapper[5115]: I0120 09:13:56.444132 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/16e383ce-519b-41ba-8dda-d0d71e14316e-tmp\") pod \"16e383ce-519b-41ba-8dda-d0d71e14316e\" (UID: \"16e383ce-519b-41ba-8dda-d0d71e14316e\") " Jan 20 09:13:56 crc kubenswrapper[5115]: I0120 09:13:56.444585 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16e383ce-519b-41ba-8dda-d0d71e14316e-tmp" (OuterVolumeSpecName: "tmp") pod "16e383ce-519b-41ba-8dda-d0d71e14316e" (UID: 
"16e383ce-519b-41ba-8dda-d0d71e14316e"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 20 09:13:56 crc kubenswrapper[5115]: I0120 09:13:56.444809 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16e383ce-519b-41ba-8dda-d0d71e14316e-config\") pod \"16e383ce-519b-41ba-8dda-d0d71e14316e\" (UID: \"16e383ce-519b-41ba-8dda-d0d71e14316e\") " Jan 20 09:13:56 crc kubenswrapper[5115]: I0120 09:13:56.445142 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/16e383ce-519b-41ba-8dda-d0d71e14316e-proxy-ca-bundles\") pod \"16e383ce-519b-41ba-8dda-d0d71e14316e\" (UID: \"16e383ce-519b-41ba-8dda-d0d71e14316e\") " Jan 20 09:13:56 crc kubenswrapper[5115]: I0120 09:13:56.445514 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16e383ce-519b-41ba-8dda-d0d71e14316e-client-ca" (OuterVolumeSpecName: "client-ca") pod "16e383ce-519b-41ba-8dda-d0d71e14316e" (UID: "16e383ce-519b-41ba-8dda-d0d71e14316e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:13:56 crc kubenswrapper[5115]: I0120 09:13:56.445518 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16e383ce-519b-41ba-8dda-d0d71e14316e-config" (OuterVolumeSpecName: "config") pod "16e383ce-519b-41ba-8dda-d0d71e14316e" (UID: "16e383ce-519b-41ba-8dda-d0d71e14316e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 20 09:13:56 crc kubenswrapper[5115]: I0120 09:13:56.445960 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/149d552c-3752-4b8b-9802-83d80439f19c-tmp\") pod \"controller-manager-64f6849bcb-vvczq\" (UID: \"149d552c-3752-4b8b-9802-83d80439f19c\") " pod="openshift-controller-manager/controller-manager-64f6849bcb-vvczq"
Jan 20 09:13:56 crc kubenswrapper[5115]: I0120 09:13:56.446212 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftff9\" (UniqueName: \"kubernetes.io/projected/149d552c-3752-4b8b-9802-83d80439f19c-kube-api-access-ftff9\") pod \"controller-manager-64f6849bcb-vvczq\" (UID: \"149d552c-3752-4b8b-9802-83d80439f19c\") " pod="openshift-controller-manager/controller-manager-64f6849bcb-vvczq"
Jan 20 09:13:56 crc kubenswrapper[5115]: I0120 09:13:56.446246 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16e383ce-519b-41ba-8dda-d0d71e14316e-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "16e383ce-519b-41ba-8dda-d0d71e14316e" (UID: "16e383ce-519b-41ba-8dda-d0d71e14316e"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 20 09:13:56 crc kubenswrapper[5115]: I0120 09:13:56.446459 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/149d552c-3752-4b8b-9802-83d80439f19c-client-ca\") pod \"controller-manager-64f6849bcb-vvczq\" (UID: \"149d552c-3752-4b8b-9802-83d80439f19c\") " pod="openshift-controller-manager/controller-manager-64f6849bcb-vvczq"
Jan 20 09:13:56 crc kubenswrapper[5115]: I0120 09:13:56.446792 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/149d552c-3752-4b8b-9802-83d80439f19c-config\") pod \"controller-manager-64f6849bcb-vvczq\" (UID: \"149d552c-3752-4b8b-9802-83d80439f19c\") " pod="openshift-controller-manager/controller-manager-64f6849bcb-vvczq"
Jan 20 09:13:56 crc kubenswrapper[5115]: I0120 09:13:56.447014 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/149d552c-3752-4b8b-9802-83d80439f19c-proxy-ca-bundles\") pod \"controller-manager-64f6849bcb-vvczq\" (UID: \"149d552c-3752-4b8b-9802-83d80439f19c\") " pod="openshift-controller-manager/controller-manager-64f6849bcb-vvczq"
Jan 20 09:13:56 crc kubenswrapper[5115]: I0120 09:13:56.447324 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/149d552c-3752-4b8b-9802-83d80439f19c-serving-cert\") pod \"controller-manager-64f6849bcb-vvczq\" (UID: \"149d552c-3752-4b8b-9802-83d80439f19c\") " pod="openshift-controller-manager/controller-manager-64f6849bcb-vvczq"
Jan 20 09:13:56 crc kubenswrapper[5115]: I0120 09:13:56.447536 5115 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/16e383ce-519b-41ba-8dda-d0d71e14316e-tmp\") on node \"crc\" DevicePath \"\""
Jan 20 09:13:56 crc kubenswrapper[5115]: I0120 09:13:56.447693 5115 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16e383ce-519b-41ba-8dda-d0d71e14316e-config\") on node \"crc\" DevicePath \"\""
Jan 20 09:13:56 crc kubenswrapper[5115]: I0120 09:13:56.447824 5115 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/16e383ce-519b-41ba-8dda-d0d71e14316e-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 20 09:13:56 crc kubenswrapper[5115]: I0120 09:13:56.448014 5115 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/16e383ce-519b-41ba-8dda-d0d71e14316e-client-ca\") on node \"crc\" DevicePath \"\""
Jan 20 09:13:56 crc kubenswrapper[5115]: I0120 09:13:56.454773 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16e383ce-519b-41ba-8dda-d0d71e14316e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "16e383ce-519b-41ba-8dda-d0d71e14316e" (UID: "16e383ce-519b-41ba-8dda-d0d71e14316e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 20 09:13:56 crc kubenswrapper[5115]: I0120 09:13:56.458102 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16e383ce-519b-41ba-8dda-d0d71e14316e-kube-api-access-q5j42" (OuterVolumeSpecName: "kube-api-access-q5j42") pod "16e383ce-519b-41ba-8dda-d0d71e14316e" (UID: "16e383ce-519b-41ba-8dda-d0d71e14316e"). InnerVolumeSpecName "kube-api-access-q5j42". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 20 09:13:56 crc kubenswrapper[5115]: I0120 09:13:56.550971 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/149d552c-3752-4b8b-9802-83d80439f19c-serving-cert\") pod \"controller-manager-64f6849bcb-vvczq\" (UID: \"149d552c-3752-4b8b-9802-83d80439f19c\") " pod="openshift-controller-manager/controller-manager-64f6849bcb-vvczq"
Jan 20 09:13:56 crc kubenswrapper[5115]: I0120 09:13:56.551966 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/149d552c-3752-4b8b-9802-83d80439f19c-tmp\") pod \"controller-manager-64f6849bcb-vvczq\" (UID: \"149d552c-3752-4b8b-9802-83d80439f19c\") " pod="openshift-controller-manager/controller-manager-64f6849bcb-vvczq"
Jan 20 09:13:56 crc kubenswrapper[5115]: I0120 09:13:56.552025 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ftff9\" (UniqueName: \"kubernetes.io/projected/149d552c-3752-4b8b-9802-83d80439f19c-kube-api-access-ftff9\") pod \"controller-manager-64f6849bcb-vvczq\" (UID: \"149d552c-3752-4b8b-9802-83d80439f19c\") " pod="openshift-controller-manager/controller-manager-64f6849bcb-vvczq"
Jan 20 09:13:56 crc kubenswrapper[5115]: I0120 09:13:56.552094 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/149d552c-3752-4b8b-9802-83d80439f19c-client-ca\") pod \"controller-manager-64f6849bcb-vvczq\" (UID: \"149d552c-3752-4b8b-9802-83d80439f19c\") " pod="openshift-controller-manager/controller-manager-64f6849bcb-vvczq"
Jan 20 09:13:56 crc kubenswrapper[5115]: I0120 09:13:56.552142 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/149d552c-3752-4b8b-9802-83d80439f19c-config\") pod \"controller-manager-64f6849bcb-vvczq\" (UID: \"149d552c-3752-4b8b-9802-83d80439f19c\") " pod="openshift-controller-manager/controller-manager-64f6849bcb-vvczq"
Jan 20 09:13:56 crc kubenswrapper[5115]: I0120 09:13:56.552342 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/149d552c-3752-4b8b-9802-83d80439f19c-proxy-ca-bundles\") pod \"controller-manager-64f6849bcb-vvczq\" (UID: \"149d552c-3752-4b8b-9802-83d80439f19c\") " pod="openshift-controller-manager/controller-manager-64f6849bcb-vvczq"
Jan 20 09:13:56 crc kubenswrapper[5115]: I0120 09:13:56.552483 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q5j42\" (UniqueName: \"kubernetes.io/projected/16e383ce-519b-41ba-8dda-d0d71e14316e-kube-api-access-q5j42\") on node \"crc\" DevicePath \"\""
Jan 20 09:13:56 crc kubenswrapper[5115]: I0120 09:13:56.552516 5115 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16e383ce-519b-41ba-8dda-d0d71e14316e-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 20 09:13:56 crc kubenswrapper[5115]: I0120 09:13:56.553686 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/149d552c-3752-4b8b-9802-83d80439f19c-config\") pod \"controller-manager-64f6849bcb-vvczq\" (UID: \"149d552c-3752-4b8b-9802-83d80439f19c\") " pod="openshift-controller-manager/controller-manager-64f6849bcb-vvczq"
Jan 20 09:13:56 crc kubenswrapper[5115]: I0120 09:13:56.553878 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/149d552c-3752-4b8b-9802-83d80439f19c-tmp\") pod \"controller-manager-64f6849bcb-vvczq\" (UID: \"149d552c-3752-4b8b-9802-83d80439f19c\") " pod="openshift-controller-manager/controller-manager-64f6849bcb-vvczq"
Jan 20 09:13:56 crc kubenswrapper[5115]: I0120 09:13:56.554350 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/149d552c-3752-4b8b-9802-83d80439f19c-proxy-ca-bundles\") pod \"controller-manager-64f6849bcb-vvczq\" (UID: \"149d552c-3752-4b8b-9802-83d80439f19c\") " pod="openshift-controller-manager/controller-manager-64f6849bcb-vvczq"
Jan 20 09:13:56 crc kubenswrapper[5115]: I0120 09:13:56.556266 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5498596948-x8xdh" event={"ID":"16e383ce-519b-41ba-8dda-d0d71e14316e","Type":"ContainerDied","Data":"c218df20d2deac261a17d41c4da30b290a798b6508dd424db9f657d86d094615"}
Jan 20 09:13:56 crc kubenswrapper[5115]: I0120 09:13:56.556327 5115 scope.go:117] "RemoveContainer" containerID="3cb0e56fdde9f8c458e1b54cda0c342be87c577e32707c597207b4b4f034a583"
Jan 20 09:13:56 crc kubenswrapper[5115]: I0120 09:13:56.556424 5115 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5498596948-x8xdh"
Jan 20 09:13:56 crc kubenswrapper[5115]: I0120 09:13:56.557047 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/149d552c-3752-4b8b-9802-83d80439f19c-client-ca\") pod \"controller-manager-64f6849bcb-vvczq\" (UID: \"149d552c-3752-4b8b-9802-83d80439f19c\") " pod="openshift-controller-manager/controller-manager-64f6849bcb-vvczq"
Jan 20 09:13:56 crc kubenswrapper[5115]: I0120 09:13:56.580120 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/149d552c-3752-4b8b-9802-83d80439f19c-serving-cert\") pod \"controller-manager-64f6849bcb-vvczq\" (UID: \"149d552c-3752-4b8b-9802-83d80439f19c\") " pod="openshift-controller-manager/controller-manager-64f6849bcb-vvczq"
Jan 20 09:13:56 crc kubenswrapper[5115]: I0120 09:13:56.584168 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftff9\" (UniqueName: \"kubernetes.io/projected/149d552c-3752-4b8b-9802-83d80439f19c-kube-api-access-ftff9\") pod \"controller-manager-64f6849bcb-vvczq\" (UID: \"149d552c-3752-4b8b-9802-83d80439f19c\") " pod="openshift-controller-manager/controller-manager-64f6849bcb-vvczq"
Jan 20 09:13:56 crc kubenswrapper[5115]: I0120 09:13:56.636655 5115 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5498596948-x8xdh"]
Jan 20 09:13:56 crc kubenswrapper[5115]: I0120 09:13:56.639758 5115 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5498596948-x8xdh"]
Jan 20 09:13:56 crc kubenswrapper[5115]: I0120 09:13:56.687097 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-64f6849bcb-vvczq"
Jan 20 09:13:57 crc kubenswrapper[5115]: I0120 09:13:57.109398 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-64f6849bcb-vvczq"]
Jan 20 09:13:57 crc kubenswrapper[5115]: I0120 09:13:57.564787 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-64f6849bcb-vvczq" event={"ID":"149d552c-3752-4b8b-9802-83d80439f19c","Type":"ContainerStarted","Data":"7400aecff61d1c18fd4f4f9e6c8d1231c82954e46b428fbe93c4d4bf520b0aa9"}
Jan 20 09:13:57 crc kubenswrapper[5115]: I0120 09:13:57.564842 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-64f6849bcb-vvczq"
Jan 20 09:13:57 crc kubenswrapper[5115]: I0120 09:13:57.564858 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-64f6849bcb-vvczq" event={"ID":"149d552c-3752-4b8b-9802-83d80439f19c","Type":"ContainerStarted","Data":"d013ac858bd01f79cf9e34a4e4f968a81f1d3e43f533b329f176b509fb2ca5b8"}
Jan 20 09:13:57 crc kubenswrapper[5115]: I0120 09:13:57.585333 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-64f6849bcb-vvczq" podStartSLOduration=2.585314701 podStartE2EDuration="2.585314701s" podCreationTimestamp="2026-01-20 09:13:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:13:57.584292925 +0000 UTC m=+347.753071455" watchObservedRunningTime="2026-01-20 09:13:57.585314701 +0000 UTC m=+347.754093231"
Jan 20 09:13:57 crc kubenswrapper[5115]: I0120 09:13:57.936310 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-64f6849bcb-vvczq"
Jan 20 09:13:58 crc kubenswrapper[5115]: I0120 09:13:58.225684 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16e383ce-519b-41ba-8dda-d0d71e14316e" path="/var/lib/kubelet/pods/16e383ce-519b-41ba-8dda-d0d71e14316e/volumes"
Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.114321 5115 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mrnvw"]
Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.115820 5115 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mrnvw" podUID="e388c4ad-0d02-4736-b503-a96f7478edb4" containerName="registry-server" containerID="cri-o://12bacbbfbfe9faaa1e7cb579c3b31cef9d5d216f92866ee82cd59e4a269034a4" gracePeriod=30
Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.120467 5115 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2dlnj"]
Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.120756 5115 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2dlnj" podUID="1d51d284-ea4b-4e3f-95bd-de28c5df1f3c" containerName="registry-server" containerID="cri-o://c06f862960c9bdfaf0ac5b708c347681a6defb95c62d2ffbb57bb0f49aff19dc" gracePeriod=30
Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.126947 5115 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-9gfdh"]
Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.127245 5115 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-547dbd544d-9gfdh" podUID="3984fc5a-413e-46e1-94ab-3c230891fe87" containerName="marketplace-operator" containerID="cri-o://fbcae2a717246018256a95dc1f3b2f061bf042569074a110a6a284fcd803f2bb" gracePeriod=30
Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.140454 5115 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5plkc"]
Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.141121 5115 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5plkc" podUID="f9d4e242-d348-4f3f-8453-612b19e41f3a" containerName="registry-server" containerID="cri-o://094fa074aa44e27d111ea636cfa5e177561853a33b91fef37dd4590007b099fc" gracePeriod=30
Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.153994 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-dh9gg"]
Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.161094 5115 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-45pv6"]
Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.161531 5115 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-45pv6" podUID="57355d9d-a14f-4cf0-8a63-842b27765063" containerName="registry-server" containerID="cri-o://3b2695392662c24c56f1422eadae97e754a2f16833a327817bd2b7835887f6bf" gracePeriod=30
Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.161245 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-dh9gg"
Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.174142 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-dh9gg"]
Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.331797 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqjwp\" (UniqueName: \"kubernetes.io/projected/b75152d4-1e91-4c11-8979-87d8e0ef68a5-kube-api-access-wqjwp\") pod \"marketplace-operator-547dbd544d-dh9gg\" (UID: \"b75152d4-1e91-4c11-8979-87d8e0ef68a5\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-dh9gg"
Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.332237 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b75152d4-1e91-4c11-8979-87d8e0ef68a5-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-dh9gg\" (UID: \"b75152d4-1e91-4c11-8979-87d8e0ef68a5\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-dh9gg"
Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.332286 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b75152d4-1e91-4c11-8979-87d8e0ef68a5-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-dh9gg\" (UID: \"b75152d4-1e91-4c11-8979-87d8e0ef68a5\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-dh9gg"
Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.332320 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b75152d4-1e91-4c11-8979-87d8e0ef68a5-tmp\") pod \"marketplace-operator-547dbd544d-dh9gg\" (UID: \"b75152d4-1e91-4c11-8979-87d8e0ef68a5\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-dh9gg"
Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.433694 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wqjwp\" (UniqueName: \"kubernetes.io/projected/b75152d4-1e91-4c11-8979-87d8e0ef68a5-kube-api-access-wqjwp\") pod \"marketplace-operator-547dbd544d-dh9gg\" (UID: \"b75152d4-1e91-4c11-8979-87d8e0ef68a5\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-dh9gg"
Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.433750 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b75152d4-1e91-4c11-8979-87d8e0ef68a5-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-dh9gg\" (UID: \"b75152d4-1e91-4c11-8979-87d8e0ef68a5\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-dh9gg"
Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.433800 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b75152d4-1e91-4c11-8979-87d8e0ef68a5-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-dh9gg\" (UID: \"b75152d4-1e91-4c11-8979-87d8e0ef68a5\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-dh9gg"
Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.433819 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b75152d4-1e91-4c11-8979-87d8e0ef68a5-tmp\") pod \"marketplace-operator-547dbd544d-dh9gg\" (UID: \"b75152d4-1e91-4c11-8979-87d8e0ef68a5\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-dh9gg"
Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.434505 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b75152d4-1e91-4c11-8979-87d8e0ef68a5-tmp\") pod \"marketplace-operator-547dbd544d-dh9gg\" (UID: \"b75152d4-1e91-4c11-8979-87d8e0ef68a5\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-dh9gg"
Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.435127 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b75152d4-1e91-4c11-8979-87d8e0ef68a5-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-dh9gg\" (UID: \"b75152d4-1e91-4c11-8979-87d8e0ef68a5\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-dh9gg"
Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.439765 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b75152d4-1e91-4c11-8979-87d8e0ef68a5-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-dh9gg\" (UID: \"b75152d4-1e91-4c11-8979-87d8e0ef68a5\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-dh9gg"
Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.452033 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqjwp\" (UniqueName: \"kubernetes.io/projected/b75152d4-1e91-4c11-8979-87d8e0ef68a5-kube-api-access-wqjwp\") pod \"marketplace-operator-547dbd544d-dh9gg\" (UID: \"b75152d4-1e91-4c11-8979-87d8e0ef68a5\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-dh9gg"
Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.479093 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-dh9gg"
Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.517139 5115 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mrnvw"
Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.559423 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-25426\" (UniqueName: \"kubernetes.io/projected/e388c4ad-0d02-4736-b503-a96f7478edb4-kube-api-access-25426\") pod \"e388c4ad-0d02-4736-b503-a96f7478edb4\" (UID: \"e388c4ad-0d02-4736-b503-a96f7478edb4\") "
Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.559595 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e388c4ad-0d02-4736-b503-a96f7478edb4-utilities\") pod \"e388c4ad-0d02-4736-b503-a96f7478edb4\" (UID: \"e388c4ad-0d02-4736-b503-a96f7478edb4\") "
Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.559734 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e388c4ad-0d02-4736-b503-a96f7478edb4-catalog-content\") pod \"e388c4ad-0d02-4736-b503-a96f7478edb4\" (UID: \"e388c4ad-0d02-4736-b503-a96f7478edb4\") "
Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.564813 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e388c4ad-0d02-4736-b503-a96f7478edb4-kube-api-access-25426" (OuterVolumeSpecName: "kube-api-access-25426") pod "e388c4ad-0d02-4736-b503-a96f7478edb4" (UID: "e388c4ad-0d02-4736-b503-a96f7478edb4"). InnerVolumeSpecName "kube-api-access-25426". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.565692 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e388c4ad-0d02-4736-b503-a96f7478edb4-utilities" (OuterVolumeSpecName: "utilities") pod "e388c4ad-0d02-4736-b503-a96f7478edb4" (UID: "e388c4ad-0d02-4736-b503-a96f7478edb4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.605091 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e388c4ad-0d02-4736-b503-a96f7478edb4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e388c4ad-0d02-4736-b503-a96f7478edb4" (UID: "e388c4ad-0d02-4736-b503-a96f7478edb4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.660842 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-25426\" (UniqueName: \"kubernetes.io/projected/e388c4ad-0d02-4736-b503-a96f7478edb4-kube-api-access-25426\") on node \"crc\" DevicePath \"\""
Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.660881 5115 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e388c4ad-0d02-4736-b503-a96f7478edb4-utilities\") on node \"crc\" DevicePath \"\""
Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.660908 5115 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e388c4ad-0d02-4736-b503-a96f7478edb4-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.671384 5115 generic.go:358] "Generic (PLEG): container finished" podID="57355d9d-a14f-4cf0-8a63-842b27765063" containerID="3b2695392662c24c56f1422eadae97e754a2f16833a327817bd2b7835887f6bf" exitCode=0
Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.671477 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-45pv6" event={"ID":"57355d9d-a14f-4cf0-8a63-842b27765063","Type":"ContainerDied","Data":"3b2695392662c24c56f1422eadae97e754a2f16833a327817bd2b7835887f6bf"}
Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.673837 5115 generic.go:358] "Generic (PLEG): container finished" podID="3984fc5a-413e-46e1-94ab-3c230891fe87" containerID="fbcae2a717246018256a95dc1f3b2f061bf042569074a110a6a284fcd803f2bb" exitCode=0
Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.673915 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-9gfdh" event={"ID":"3984fc5a-413e-46e1-94ab-3c230891fe87","Type":"ContainerDied","Data":"fbcae2a717246018256a95dc1f3b2f061bf042569074a110a6a284fcd803f2bb"}
Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.673936 5115 scope.go:117] "RemoveContainer" containerID="875b2918867b6e3f78a8dae2adc4f181e4875284a8cd56fc5c6d213e75261ea2"
Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.681305 5115 generic.go:358] "Generic (PLEG): container finished" podID="1d51d284-ea4b-4e3f-95bd-de28c5df1f3c" containerID="c06f862960c9bdfaf0ac5b708c347681a6defb95c62d2ffbb57bb0f49aff19dc" exitCode=0
Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.681764 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2dlnj" event={"ID":"1d51d284-ea4b-4e3f-95bd-de28c5df1f3c","Type":"ContainerDied","Data":"c06f862960c9bdfaf0ac5b708c347681a6defb95c62d2ffbb57bb0f49aff19dc"}
Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.686169 5115 generic.go:358] "Generic (PLEG): container finished" podID="e388c4ad-0d02-4736-b503-a96f7478edb4" containerID="12bacbbfbfe9faaa1e7cb579c3b31cef9d5d216f92866ee82cd59e4a269034a4" exitCode=0
Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.686391 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mrnvw" event={"ID":"e388c4ad-0d02-4736-b503-a96f7478edb4","Type":"ContainerDied","Data":"12bacbbfbfe9faaa1e7cb579c3b31cef9d5d216f92866ee82cd59e4a269034a4"}
Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.686440 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mrnvw" event={"ID":"e388c4ad-0d02-4736-b503-a96f7478edb4","Type":"ContainerDied","Data":"ba3c29f3ff3951d423c587bfc54fde3036fb68c70ae8bcabcb0199b3d1a764a2"}
Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.686541 5115 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mrnvw"
Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.688714 5115 generic.go:358] "Generic (PLEG): container finished" podID="f9d4e242-d348-4f3f-8453-612b19e41f3a" containerID="094fa074aa44e27d111ea636cfa5e177561853a33b91fef37dd4590007b099fc" exitCode=0
Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.688935 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5plkc" event={"ID":"f9d4e242-d348-4f3f-8453-612b19e41f3a","Type":"ContainerDied","Data":"094fa074aa44e27d111ea636cfa5e177561853a33b91fef37dd4590007b099fc"}
Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.689751 5115 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2dlnj"
Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.717005 5115 scope.go:117] "RemoveContainer" containerID="12bacbbfbfe9faaa1e7cb579c3b31cef9d5d216f92866ee82cd59e4a269034a4"
Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.729836 5115 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mrnvw"]
Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.733677 5115 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mrnvw"]
Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.743225 5115 scope.go:117] "RemoveContainer" containerID="9c95486a14862e504cd21f4a5c67708671af72d9da1f61dfdf84b84b34aa1ed4"
Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.767456 5115 scope.go:117] "RemoveContainer" containerID="641a2305dbc76735572c7584f2d8452f84f02582dbd2624bbe12d1f145836a77"
Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.786079 5115 scope.go:117] "RemoveContainer" containerID="12bacbbfbfe9faaa1e7cb579c3b31cef9d5d216f92866ee82cd59e4a269034a4"
Jan 20 09:14:09 crc kubenswrapper[5115]: E0120 09:14:09.786498 5115 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12bacbbfbfe9faaa1e7cb579c3b31cef9d5d216f92866ee82cd59e4a269034a4\": container with ID starting with 12bacbbfbfe9faaa1e7cb579c3b31cef9d5d216f92866ee82cd59e4a269034a4 not found: ID does not exist" containerID="12bacbbfbfe9faaa1e7cb579c3b31cef9d5d216f92866ee82cd59e4a269034a4"
Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.786530 5115 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12bacbbfbfe9faaa1e7cb579c3b31cef9d5d216f92866ee82cd59e4a269034a4"} err="failed to get container status \"12bacbbfbfe9faaa1e7cb579c3b31cef9d5d216f92866ee82cd59e4a269034a4\": rpc error: code = NotFound desc = could not find container \"12bacbbfbfe9faaa1e7cb579c3b31cef9d5d216f92866ee82cd59e4a269034a4\": container with ID starting with 12bacbbfbfe9faaa1e7cb579c3b31cef9d5d216f92866ee82cd59e4a269034a4 not found: ID does not exist"
Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.786556 5115 scope.go:117] "RemoveContainer" containerID="9c95486a14862e504cd21f4a5c67708671af72d9da1f61dfdf84b84b34aa1ed4"
Jan 20 09:14:09 crc kubenswrapper[5115]: E0120 09:14:09.786946 5115 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c95486a14862e504cd21f4a5c67708671af72d9da1f61dfdf84b84b34aa1ed4\": container with ID starting with 9c95486a14862e504cd21f4a5c67708671af72d9da1f61dfdf84b84b34aa1ed4 not found: ID does not exist" containerID="9c95486a14862e504cd21f4a5c67708671af72d9da1f61dfdf84b84b34aa1ed4"
Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.786977 5115 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c95486a14862e504cd21f4a5c67708671af72d9da1f61dfdf84b84b34aa1ed4"} err="failed to get container status \"9c95486a14862e504cd21f4a5c67708671af72d9da1f61dfdf84b84b34aa1ed4\": rpc error: code = NotFound desc = could not find container \"9c95486a14862e504cd21f4a5c67708671af72d9da1f61dfdf84b84b34aa1ed4\": container with ID starting with 9c95486a14862e504cd21f4a5c67708671af72d9da1f61dfdf84b84b34aa1ed4 not found: ID does not exist"
Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.786994 5115 scope.go:117] "RemoveContainer" containerID="641a2305dbc76735572c7584f2d8452f84f02582dbd2624bbe12d1f145836a77"
Jan 20 09:14:09 crc kubenswrapper[5115]: E0120 09:14:09.787217 5115 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"641a2305dbc76735572c7584f2d8452f84f02582dbd2624bbe12d1f145836a77\": container with ID starting with 641a2305dbc76735572c7584f2d8452f84f02582dbd2624bbe12d1f145836a77 not found: ID does not exist" containerID="641a2305dbc76735572c7584f2d8452f84f02582dbd2624bbe12d1f145836a77"
Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.787242 5115 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"641a2305dbc76735572c7584f2d8452f84f02582dbd2624bbe12d1f145836a77"} err="failed to get container status \"641a2305dbc76735572c7584f2d8452f84f02582dbd2624bbe12d1f145836a77\": rpc error: code = NotFound desc = could not find container \"641a2305dbc76735572c7584f2d8452f84f02582dbd2624bbe12d1f145836a77\": container with ID starting with 641a2305dbc76735572c7584f2d8452f84f02582dbd2624bbe12d1f145836a77 not found: ID does not exist"
Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.802445 5115 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5plkc"
Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.813851 5115 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-9gfdh"
Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.826714 5115 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-45pv6"
Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.862601 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d51d284-ea4b-4e3f-95bd-de28c5df1f3c-catalog-content\") pod \"1d51d284-ea4b-4e3f-95bd-de28c5df1f3c\" (UID: \"1d51d284-ea4b-4e3f-95bd-de28c5df1f3c\") "
Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.862883 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57355d9d-a14f-4cf0-8a63-842b27765063-catalog-content\") pod \"57355d9d-a14f-4cf0-8a63-842b27765063\" (UID: \"57355d9d-a14f-4cf0-8a63-842b27765063\") "
Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.863048 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9d4e242-d348-4f3f-8453-612b19e41f3a-catalog-content\") pod \"f9d4e242-d348-4f3f-8453-612b19e41f3a\" (UID: \"f9d4e242-d348-4f3f-8453-612b19e41f3a\") "
Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.863168 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6cc7\" (UniqueName: \"kubernetes.io/projected/f9d4e242-d348-4f3f-8453-612b19e41f3a-kube-api-access-x6cc7\") pod \"f9d4e242-d348-4f3f-8453-612b19e41f3a\" (UID: \"f9d4e242-d348-4f3f-8453-612b19e41f3a\") "
Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.863294 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l6hvv\" (UniqueName: \"kubernetes.io/projected/3984fc5a-413e-46e1-94ab-3c230891fe87-kube-api-access-l6hvv\") pod \"3984fc5a-413e-46e1-94ab-3c230891fe87\" (UID: \"3984fc5a-413e-46e1-94ab-3c230891fe87\") "
Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.863482 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3984fc5a-413e-46e1-94ab-3c230891fe87-tmp\") pod \"3984fc5a-413e-46e1-94ab-3c230891fe87\" (UID: \"3984fc5a-413e-46e1-94ab-3c230891fe87\") "
Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.863620 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9d4e242-d348-4f3f-8453-612b19e41f3a-utilities\") pod \"f9d4e242-d348-4f3f-8453-612b19e41f3a\" (UID: \"f9d4e242-d348-4f3f-8453-612b19e41f3a\") "
Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.863741 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3984fc5a-413e-46e1-94ab-3c230891fe87-marketplace-trusted-ca\") pod \"3984fc5a-413e-46e1-94ab-3c230891fe87\" (UID: \"3984fc5a-413e-46e1-94ab-3c230891fe87\") "
Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.863862 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3984fc5a-413e-46e1-94ab-3c230891fe87-marketplace-operator-metrics\") pod \"3984fc5a-413e-46e1-94ab-3c230891fe87\" (UID: \"3984fc5a-413e-46e1-94ab-3c230891fe87\") "
Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.864031 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwzdv\" (UniqueName: \"kubernetes.io/projected/1d51d284-ea4b-4e3f-95bd-de28c5df1f3c-kube-api-access-xwzdv\") pod \"1d51d284-ea4b-4e3f-95bd-de28c5df1f3c\" (UID: \"1d51d284-ea4b-4e3f-95bd-de28c5df1f3c\") "
Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.864159 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d51d284-ea4b-4e3f-95bd-de28c5df1f3c-utilities\") pod \"1d51d284-ea4b-4e3f-95bd-de28c5df1f3c\" (UID:
\"1d51d284-ea4b-4e3f-95bd-de28c5df1f3c\") " Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.864313 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4msjm\" (UniqueName: \"kubernetes.io/projected/57355d9d-a14f-4cf0-8a63-842b27765063-kube-api-access-4msjm\") pod \"57355d9d-a14f-4cf0-8a63-842b27765063\" (UID: \"57355d9d-a14f-4cf0-8a63-842b27765063\") " Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.864614 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57355d9d-a14f-4cf0-8a63-842b27765063-utilities\") pod \"57355d9d-a14f-4cf0-8a63-842b27765063\" (UID: \"57355d9d-a14f-4cf0-8a63-842b27765063\") " Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.867446 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3984fc5a-413e-46e1-94ab-3c230891fe87-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "3984fc5a-413e-46e1-94ab-3c230891fe87" (UID: "3984fc5a-413e-46e1-94ab-3c230891fe87"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.868036 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3984fc5a-413e-46e1-94ab-3c230891fe87-tmp" (OuterVolumeSpecName: "tmp") pod "3984fc5a-413e-46e1-94ab-3c230891fe87" (UID: "3984fc5a-413e-46e1-94ab-3c230891fe87"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.868074 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d51d284-ea4b-4e3f-95bd-de28c5df1f3c-utilities" (OuterVolumeSpecName: "utilities") pod "1d51d284-ea4b-4e3f-95bd-de28c5df1f3c" (UID: "1d51d284-ea4b-4e3f-95bd-de28c5df1f3c"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.868257 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9d4e242-d348-4f3f-8453-612b19e41f3a-utilities" (OuterVolumeSpecName: "utilities") pod "f9d4e242-d348-4f3f-8453-612b19e41f3a" (UID: "f9d4e242-d348-4f3f-8453-612b19e41f3a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.868600 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57355d9d-a14f-4cf0-8a63-842b27765063-utilities" (OuterVolumeSpecName: "utilities") pod "57355d9d-a14f-4cf0-8a63-842b27765063" (UID: "57355d9d-a14f-4cf0-8a63-842b27765063"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.870176 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57355d9d-a14f-4cf0-8a63-842b27765063-kube-api-access-4msjm" (OuterVolumeSpecName: "kube-api-access-4msjm") pod "57355d9d-a14f-4cf0-8a63-842b27765063" (UID: "57355d9d-a14f-4cf0-8a63-842b27765063"). InnerVolumeSpecName "kube-api-access-4msjm". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.883453 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9d4e242-d348-4f3f-8453-612b19e41f3a-kube-api-access-x6cc7" (OuterVolumeSpecName: "kube-api-access-x6cc7") pod "f9d4e242-d348-4f3f-8453-612b19e41f3a" (UID: "f9d4e242-d348-4f3f-8453-612b19e41f3a"). InnerVolumeSpecName "kube-api-access-x6cc7". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.883468 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d51d284-ea4b-4e3f-95bd-de28c5df1f3c-kube-api-access-xwzdv" (OuterVolumeSpecName: "kube-api-access-xwzdv") pod "1d51d284-ea4b-4e3f-95bd-de28c5df1f3c" (UID: "1d51d284-ea4b-4e3f-95bd-de28c5df1f3c"). InnerVolumeSpecName "kube-api-access-xwzdv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.883681 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3984fc5a-413e-46e1-94ab-3c230891fe87-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "3984fc5a-413e-46e1-94ab-3c230891fe87" (UID: "3984fc5a-413e-46e1-94ab-3c230891fe87"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.887432 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9d4e242-d348-4f3f-8453-612b19e41f3a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f9d4e242-d348-4f3f-8453-612b19e41f3a" (UID: "f9d4e242-d348-4f3f-8453-612b19e41f3a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.888400 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3984fc5a-413e-46e1-94ab-3c230891fe87-kube-api-access-l6hvv" (OuterVolumeSpecName: "kube-api-access-l6hvv") pod "3984fc5a-413e-46e1-94ab-3c230891fe87" (UID: "3984fc5a-413e-46e1-94ab-3c230891fe87"). InnerVolumeSpecName "kube-api-access-l6hvv". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.919600 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d51d284-ea4b-4e3f-95bd-de28c5df1f3c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d51d284-ea4b-4e3f-95bd-de28c5df1f3c" (UID: "1d51d284-ea4b-4e3f-95bd-de28c5df1f3c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.965675 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4msjm\" (UniqueName: \"kubernetes.io/projected/57355d9d-a14f-4cf0-8a63-842b27765063-kube-api-access-4msjm\") on node \"crc\" DevicePath \"\"" Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.965700 5115 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57355d9d-a14f-4cf0-8a63-842b27765063-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.965708 5115 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d51d284-ea4b-4e3f-95bd-de28c5df1f3c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.965716 5115 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9d4e242-d348-4f3f-8453-612b19e41f3a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.965724 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-x6cc7\" (UniqueName: \"kubernetes.io/projected/f9d4e242-d348-4f3f-8453-612b19e41f3a-kube-api-access-x6cc7\") on node \"crc\" DevicePath \"\"" Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.965732 5115 reconciler_common.go:299] "Volume detached for volume 
\"kube-api-access-l6hvv\" (UniqueName: \"kubernetes.io/projected/3984fc5a-413e-46e1-94ab-3c230891fe87-kube-api-access-l6hvv\") on node \"crc\" DevicePath \"\"" Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.965741 5115 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3984fc5a-413e-46e1-94ab-3c230891fe87-tmp\") on node \"crc\" DevicePath \"\"" Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.965749 5115 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9d4e242-d348-4f3f-8453-612b19e41f3a-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.965756 5115 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3984fc5a-413e-46e1-94ab-3c230891fe87-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.965764 5115 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3984fc5a-413e-46e1-94ab-3c230891fe87-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.965773 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xwzdv\" (UniqueName: \"kubernetes.io/projected/1d51d284-ea4b-4e3f-95bd-de28c5df1f3c-kube-api-access-xwzdv\") on node \"crc\" DevicePath \"\"" Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.965781 5115 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d51d284-ea4b-4e3f-95bd-de28c5df1f3c-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.973355 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57355d9d-a14f-4cf0-8a63-842b27765063-catalog-content" 
(OuterVolumeSpecName: "catalog-content") pod "57355d9d-a14f-4cf0-8a63-842b27765063" (UID: "57355d9d-a14f-4cf0-8a63-842b27765063"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 20 09:14:10 crc kubenswrapper[5115]: I0120 09:14:10.025776 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-dh9gg"] Jan 20 09:14:10 crc kubenswrapper[5115]: I0120 09:14:10.069122 5115 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57355d9d-a14f-4cf0-8a63-842b27765063-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 09:14:10 crc kubenswrapper[5115]: I0120 09:14:10.226123 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e388c4ad-0d02-4736-b503-a96f7478edb4" path="/var/lib/kubelet/pods/e388c4ad-0d02-4736-b503-a96f7478edb4/volumes" Jan 20 09:14:10 crc kubenswrapper[5115]: I0120 09:14:10.695288 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-45pv6" event={"ID":"57355d9d-a14f-4cf0-8a63-842b27765063","Type":"ContainerDied","Data":"2a29832ffd9412a21621468b6591cb9a7196b1735133523a4d5919937f22f017"} Jan 20 09:14:10 crc kubenswrapper[5115]: I0120 09:14:10.695679 5115 scope.go:117] "RemoveContainer" containerID="3b2695392662c24c56f1422eadae97e754a2f16833a327817bd2b7835887f6bf" Jan 20 09:14:10 crc kubenswrapper[5115]: I0120 09:14:10.695343 5115 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-45pv6" Jan 20 09:14:10 crc kubenswrapper[5115]: I0120 09:14:10.697792 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-9gfdh" event={"ID":"3984fc5a-413e-46e1-94ab-3c230891fe87","Type":"ContainerDied","Data":"ba9e935cd9dbcccba3373b56114fb5112e6bd4ddbcf850c03f77ef25fb786214"} Jan 20 09:14:10 crc kubenswrapper[5115]: I0120 09:14:10.697847 5115 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-9gfdh" Jan 20 09:14:10 crc kubenswrapper[5115]: I0120 09:14:10.700055 5115 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2dlnj" Jan 20 09:14:10 crc kubenswrapper[5115]: I0120 09:14:10.700070 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2dlnj" event={"ID":"1d51d284-ea4b-4e3f-95bd-de28c5df1f3c","Type":"ContainerDied","Data":"b623557fb8fa89838a7fffcb0c7e471eeaf77057e10e543a3504832324b27404"} Jan 20 09:14:10 crc kubenswrapper[5115]: I0120 09:14:10.703065 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-dh9gg" event={"ID":"b75152d4-1e91-4c11-8979-87d8e0ef68a5","Type":"ContainerStarted","Data":"f91f012d8d51da192c2cb70d076583a067b4976c8cd68a303b4e31a65ccfbe92"} Jan 20 09:14:10 crc kubenswrapper[5115]: I0120 09:14:10.703108 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-dh9gg" event={"ID":"b75152d4-1e91-4c11-8979-87d8e0ef68a5","Type":"ContainerStarted","Data":"67a4a7f9483a6190b221913a94005d524b343bc29b1ea84d548cf0fd3b574ebf"} Jan 20 09:14:10 crc kubenswrapper[5115]: I0120 09:14:10.703687 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" 
pod="openshift-marketplace/marketplace-operator-547dbd544d-dh9gg" Jan 20 09:14:10 crc kubenswrapper[5115]: I0120 09:14:10.708726 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5plkc" event={"ID":"f9d4e242-d348-4f3f-8453-612b19e41f3a","Type":"ContainerDied","Data":"50d3c0e76b095c21c4ac1a5beba7290e74c3ffa7941936c22e8017974e850944"} Jan 20 09:14:10 crc kubenswrapper[5115]: I0120 09:14:10.708880 5115 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5plkc" Jan 20 09:14:10 crc kubenswrapper[5115]: I0120 09:14:10.711531 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-dh9gg" Jan 20 09:14:10 crc kubenswrapper[5115]: I0120 09:14:10.713954 5115 scope.go:117] "RemoveContainer" containerID="1c7349b861fcc3cdec3f5eaa960ebb43329afec1ce06d636fabc17f9cb7e20c8" Jan 20 09:14:10 crc kubenswrapper[5115]: I0120 09:14:10.723505 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-dh9gg" podStartSLOduration=1.723489861 podStartE2EDuration="1.723489861s" podCreationTimestamp="2026-01-20 09:14:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:14:10.721857207 +0000 UTC m=+360.890635737" watchObservedRunningTime="2026-01-20 09:14:10.723489861 +0000 UTC m=+360.892268391" Jan 20 09:14:10 crc kubenswrapper[5115]: I0120 09:14:10.736342 5115 scope.go:117] "RemoveContainer" containerID="09806ac667b8436fffdd10a05c009eff6bb4282dd93406b629566c95167bc9ea" Jan 20 09:14:10 crc kubenswrapper[5115]: I0120 09:14:10.743267 5115 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-9gfdh"] Jan 20 09:14:10 crc kubenswrapper[5115]: I0120 09:14:10.748918 5115 
kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-9gfdh"] Jan 20 09:14:10 crc kubenswrapper[5115]: I0120 09:14:10.753849 5115 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2dlnj"] Jan 20 09:14:10 crc kubenswrapper[5115]: I0120 09:14:10.759340 5115 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2dlnj"] Jan 20 09:14:10 crc kubenswrapper[5115]: I0120 09:14:10.762984 5115 scope.go:117] "RemoveContainer" containerID="fbcae2a717246018256a95dc1f3b2f061bf042569074a110a6a284fcd803f2bb" Jan 20 09:14:10 crc kubenswrapper[5115]: I0120 09:14:10.779120 5115 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-45pv6"] Jan 20 09:14:10 crc kubenswrapper[5115]: I0120 09:14:10.784659 5115 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-45pv6"] Jan 20 09:14:10 crc kubenswrapper[5115]: I0120 09:14:10.790614 5115 scope.go:117] "RemoveContainer" containerID="c06f862960c9bdfaf0ac5b708c347681a6defb95c62d2ffbb57bb0f49aff19dc" Jan 20 09:14:10 crc kubenswrapper[5115]: I0120 09:14:10.812944 5115 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5plkc"] Jan 20 09:14:10 crc kubenswrapper[5115]: I0120 09:14:10.814321 5115 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5plkc"] Jan 20 09:14:10 crc kubenswrapper[5115]: I0120 09:14:10.830838 5115 scope.go:117] "RemoveContainer" containerID="a33dfb9140b05712014768cf8b01acc9283196096d0f87e1b764f33c91c5086f" Jan 20 09:14:10 crc kubenswrapper[5115]: I0120 09:14:10.868970 5115 scope.go:117] "RemoveContainer" containerID="06668f7c92efbf93f8c0b42e46d251a0aadb5b80b4c08ce779cc27955ee5a124" Jan 20 09:14:10 crc kubenswrapper[5115]: I0120 09:14:10.889977 5115 scope.go:117] "RemoveContainer" 
containerID="094fa074aa44e27d111ea636cfa5e177561853a33b91fef37dd4590007b099fc" Jan 20 09:14:10 crc kubenswrapper[5115]: I0120 09:14:10.911095 5115 scope.go:117] "RemoveContainer" containerID="74b5178a1b534ac941dea2392034f3b3ec2731f44ad8c1e9849d9151b8564a9d" Jan 20 09:14:10 crc kubenswrapper[5115]: I0120 09:14:10.925655 5115 scope.go:117] "RemoveContainer" containerID="292ea7ef1a462b0b3647f2424736d354073f39a37c563e3f2ffad608521d16f7" Jan 20 09:14:11 crc kubenswrapper[5115]: I0120 09:14:11.731313 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fz98h"] Jan 20 09:14:11 crc kubenswrapper[5115]: I0120 09:14:11.732482 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="57355d9d-a14f-4cf0-8a63-842b27765063" containerName="registry-server" Jan 20 09:14:11 crc kubenswrapper[5115]: I0120 09:14:11.732520 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="57355d9d-a14f-4cf0-8a63-842b27765063" containerName="registry-server" Jan 20 09:14:11 crc kubenswrapper[5115]: I0120 09:14:11.732541 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1d51d284-ea4b-4e3f-95bd-de28c5df1f3c" containerName="extract-utilities" Jan 20 09:14:11 crc kubenswrapper[5115]: I0120 09:14:11.732554 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d51d284-ea4b-4e3f-95bd-de28c5df1f3c" containerName="extract-utilities" Jan 20 09:14:11 crc kubenswrapper[5115]: I0120 09:14:11.732571 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e388c4ad-0d02-4736-b503-a96f7478edb4" containerName="extract-utilities" Jan 20 09:14:11 crc kubenswrapper[5115]: I0120 09:14:11.732583 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="e388c4ad-0d02-4736-b503-a96f7478edb4" containerName="extract-utilities" Jan 20 09:14:11 crc kubenswrapper[5115]: I0120 09:14:11.732612 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" 
podUID="e388c4ad-0d02-4736-b503-a96f7478edb4" containerName="registry-server" Jan 20 09:14:11 crc kubenswrapper[5115]: I0120 09:14:11.732624 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="e388c4ad-0d02-4736-b503-a96f7478edb4" containerName="registry-server" Jan 20 09:14:11 crc kubenswrapper[5115]: I0120 09:14:11.732639 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f9d4e242-d348-4f3f-8453-612b19e41f3a" containerName="extract-content" Jan 20 09:14:11 crc kubenswrapper[5115]: I0120 09:14:11.732650 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9d4e242-d348-4f3f-8453-612b19e41f3a" containerName="extract-content" Jan 20 09:14:11 crc kubenswrapper[5115]: I0120 09:14:11.732670 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e388c4ad-0d02-4736-b503-a96f7478edb4" containerName="extract-content" Jan 20 09:14:11 crc kubenswrapper[5115]: I0120 09:14:11.732681 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="e388c4ad-0d02-4736-b503-a96f7478edb4" containerName="extract-content" Jan 20 09:14:11 crc kubenswrapper[5115]: I0120 09:14:11.732727 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1d51d284-ea4b-4e3f-95bd-de28c5df1f3c" containerName="registry-server" Jan 20 09:14:11 crc kubenswrapper[5115]: I0120 09:14:11.732741 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d51d284-ea4b-4e3f-95bd-de28c5df1f3c" containerName="registry-server" Jan 20 09:14:11 crc kubenswrapper[5115]: I0120 09:14:11.732757 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3984fc5a-413e-46e1-94ab-3c230891fe87" containerName="marketplace-operator" Jan 20 09:14:11 crc kubenswrapper[5115]: I0120 09:14:11.732769 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="3984fc5a-413e-46e1-94ab-3c230891fe87" containerName="marketplace-operator" Jan 20 09:14:11 crc kubenswrapper[5115]: I0120 09:14:11.732788 5115 
cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="57355d9d-a14f-4cf0-8a63-842b27765063" containerName="extract-utilities" Jan 20 09:14:11 crc kubenswrapper[5115]: I0120 09:14:11.732800 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="57355d9d-a14f-4cf0-8a63-842b27765063" containerName="extract-utilities" Jan 20 09:14:11 crc kubenswrapper[5115]: I0120 09:14:11.732815 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3984fc5a-413e-46e1-94ab-3c230891fe87" containerName="marketplace-operator" Jan 20 09:14:11 crc kubenswrapper[5115]: I0120 09:14:11.732826 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="3984fc5a-413e-46e1-94ab-3c230891fe87" containerName="marketplace-operator" Jan 20 09:14:11 crc kubenswrapper[5115]: I0120 09:14:11.732841 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f9d4e242-d348-4f3f-8453-612b19e41f3a" containerName="registry-server" Jan 20 09:14:11 crc kubenswrapper[5115]: I0120 09:14:11.732852 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9d4e242-d348-4f3f-8453-612b19e41f3a" containerName="registry-server" Jan 20 09:14:11 crc kubenswrapper[5115]: I0120 09:14:11.732889 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f9d4e242-d348-4f3f-8453-612b19e41f3a" containerName="extract-utilities" Jan 20 09:14:11 crc kubenswrapper[5115]: I0120 09:14:11.732930 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9d4e242-d348-4f3f-8453-612b19e41f3a" containerName="extract-utilities" Jan 20 09:14:11 crc kubenswrapper[5115]: I0120 09:14:11.732945 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="57355d9d-a14f-4cf0-8a63-842b27765063" containerName="extract-content" Jan 20 09:14:11 crc kubenswrapper[5115]: I0120 09:14:11.732956 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="57355d9d-a14f-4cf0-8a63-842b27765063" 
containerName="extract-content" Jan 20 09:14:11 crc kubenswrapper[5115]: I0120 09:14:11.732969 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1d51d284-ea4b-4e3f-95bd-de28c5df1f3c" containerName="extract-content" Jan 20 09:14:11 crc kubenswrapper[5115]: I0120 09:14:11.732980 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d51d284-ea4b-4e3f-95bd-de28c5df1f3c" containerName="extract-content" Jan 20 09:14:11 crc kubenswrapper[5115]: I0120 09:14:11.733141 5115 memory_manager.go:356] "RemoveStaleState removing state" podUID="3984fc5a-413e-46e1-94ab-3c230891fe87" containerName="marketplace-operator" Jan 20 09:14:11 crc kubenswrapper[5115]: I0120 09:14:11.733173 5115 memory_manager.go:356] "RemoveStaleState removing state" podUID="1d51d284-ea4b-4e3f-95bd-de28c5df1f3c" containerName="registry-server" Jan 20 09:14:11 crc kubenswrapper[5115]: I0120 09:14:11.733191 5115 memory_manager.go:356] "RemoveStaleState removing state" podUID="3984fc5a-413e-46e1-94ab-3c230891fe87" containerName="marketplace-operator" Jan 20 09:14:11 crc kubenswrapper[5115]: I0120 09:14:11.733216 5115 memory_manager.go:356] "RemoveStaleState removing state" podUID="e388c4ad-0d02-4736-b503-a96f7478edb4" containerName="registry-server" Jan 20 09:14:11 crc kubenswrapper[5115]: I0120 09:14:11.733234 5115 memory_manager.go:356] "RemoveStaleState removing state" podUID="f9d4e242-d348-4f3f-8453-612b19e41f3a" containerName="registry-server" Jan 20 09:14:11 crc kubenswrapper[5115]: I0120 09:14:11.733257 5115 memory_manager.go:356] "RemoveStaleState removing state" podUID="57355d9d-a14f-4cf0-8a63-842b27765063" containerName="registry-server" Jan 20 09:14:11 crc kubenswrapper[5115]: I0120 09:14:11.773239 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fz98h"] Jan 20 09:14:11 crc kubenswrapper[5115]: I0120 09:14:11.773379 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fz98h" Jan 20 09:14:11 crc kubenswrapper[5115]: I0120 09:14:11.783253 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Jan 20 09:14:11 crc kubenswrapper[5115]: I0120 09:14:11.792173 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aad987c3-e453-432f-8c54-3c7a336446f9-utilities\") pod \"redhat-operators-fz98h\" (UID: \"aad987c3-e453-432f-8c54-3c7a336446f9\") " pod="openshift-marketplace/redhat-operators-fz98h" Jan 20 09:14:11 crc kubenswrapper[5115]: I0120 09:14:11.792295 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksj9j\" (UniqueName: \"kubernetes.io/projected/aad987c3-e453-432f-8c54-3c7a336446f9-kube-api-access-ksj9j\") pod \"redhat-operators-fz98h\" (UID: \"aad987c3-e453-432f-8c54-3c7a336446f9\") " pod="openshift-marketplace/redhat-operators-fz98h" Jan 20 09:14:11 crc kubenswrapper[5115]: I0120 09:14:11.792441 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aad987c3-e453-432f-8c54-3c7a336446f9-catalog-content\") pod \"redhat-operators-fz98h\" (UID: \"aad987c3-e453-432f-8c54-3c7a336446f9\") " pod="openshift-marketplace/redhat-operators-fz98h" Jan 20 09:14:11 crc kubenswrapper[5115]: I0120 09:14:11.893734 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aad987c3-e453-432f-8c54-3c7a336446f9-catalog-content\") pod \"redhat-operators-fz98h\" (UID: \"aad987c3-e453-432f-8c54-3c7a336446f9\") " pod="openshift-marketplace/redhat-operators-fz98h" Jan 20 09:14:11 crc kubenswrapper[5115]: I0120 09:14:11.893885 5115 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aad987c3-e453-432f-8c54-3c7a336446f9-utilities\") pod \"redhat-operators-fz98h\" (UID: \"aad987c3-e453-432f-8c54-3c7a336446f9\") " pod="openshift-marketplace/redhat-operators-fz98h"
Jan 20 09:14:11 crc kubenswrapper[5115]: I0120 09:14:11.893975 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ksj9j\" (UniqueName: \"kubernetes.io/projected/aad987c3-e453-432f-8c54-3c7a336446f9-kube-api-access-ksj9j\") pod \"redhat-operators-fz98h\" (UID: \"aad987c3-e453-432f-8c54-3c7a336446f9\") " pod="openshift-marketplace/redhat-operators-fz98h"
Jan 20 09:14:11 crc kubenswrapper[5115]: I0120 09:14:11.894620 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aad987c3-e453-432f-8c54-3c7a336446f9-utilities\") pod \"redhat-operators-fz98h\" (UID: \"aad987c3-e453-432f-8c54-3c7a336446f9\") " pod="openshift-marketplace/redhat-operators-fz98h"
Jan 20 09:14:11 crc kubenswrapper[5115]: I0120 09:14:11.895110 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aad987c3-e453-432f-8c54-3c7a336446f9-catalog-content\") pod \"redhat-operators-fz98h\" (UID: \"aad987c3-e453-432f-8c54-3c7a336446f9\") " pod="openshift-marketplace/redhat-operators-fz98h"
Jan 20 09:14:11 crc kubenswrapper[5115]: I0120 09:14:11.920973 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksj9j\" (UniqueName: \"kubernetes.io/projected/aad987c3-e453-432f-8c54-3c7a336446f9-kube-api-access-ksj9j\") pod \"redhat-operators-fz98h\" (UID: \"aad987c3-e453-432f-8c54-3c7a336446f9\") " pod="openshift-marketplace/redhat-operators-fz98h"
Jan 20 09:14:12 crc kubenswrapper[5115]: I0120 09:14:12.104128 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fz98h"
Jan 20 09:14:12 crc kubenswrapper[5115]: I0120 09:14:12.233869 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d51d284-ea4b-4e3f-95bd-de28c5df1f3c" path="/var/lib/kubelet/pods/1d51d284-ea4b-4e3f-95bd-de28c5df1f3c/volumes"
Jan 20 09:14:12 crc kubenswrapper[5115]: I0120 09:14:12.235142 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3984fc5a-413e-46e1-94ab-3c230891fe87" path="/var/lib/kubelet/pods/3984fc5a-413e-46e1-94ab-3c230891fe87/volumes"
Jan 20 09:14:12 crc kubenswrapper[5115]: I0120 09:14:12.235968 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57355d9d-a14f-4cf0-8a63-842b27765063" path="/var/lib/kubelet/pods/57355d9d-a14f-4cf0-8a63-842b27765063/volumes"
Jan 20 09:14:12 crc kubenswrapper[5115]: I0120 09:14:12.237498 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9d4e242-d348-4f3f-8453-612b19e41f3a" path="/var/lib/kubelet/pods/f9d4e242-d348-4f3f-8453-612b19e41f3a/volumes"
Jan 20 09:14:12 crc kubenswrapper[5115]: I0120 09:14:12.570636 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fz98h"]
Jan 20 09:14:12 crc kubenswrapper[5115]: W0120 09:14:12.579441 5115 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaad987c3_e453_432f_8c54_3c7a336446f9.slice/crio-af6da638e09de5359de1f528b19de846e2618df5088fe16aa3907b3b0399afc7 WatchSource:0}: Error finding container af6da638e09de5359de1f528b19de846e2618df5088fe16aa3907b3b0399afc7: Status 404 returned error can't find the container with id af6da638e09de5359de1f528b19de846e2618df5088fe16aa3907b3b0399afc7
Jan 20 09:14:12 crc kubenswrapper[5115]: I0120 09:14:12.725155 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9ckvv"]
Jan 20 09:14:12 crc kubenswrapper[5115]: I0120 09:14:12.730140 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9ckvv"
Jan 20 09:14:12 crc kubenswrapper[5115]: I0120 09:14:12.735972 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9ckvv"]
Jan 20 09:14:12 crc kubenswrapper[5115]: I0120 09:14:12.738621 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\""
Jan 20 09:14:12 crc kubenswrapper[5115]: I0120 09:14:12.750908 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fz98h" event={"ID":"aad987c3-e453-432f-8c54-3c7a336446f9","Type":"ContainerStarted","Data":"af6da638e09de5359de1f528b19de846e2618df5088fe16aa3907b3b0399afc7"}
Jan 20 09:14:12 crc kubenswrapper[5115]: I0120 09:14:12.803340 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a5b59fd-dfe1-4370-8768-28c4a001c9e3-utilities\") pod \"community-operators-9ckvv\" (UID: \"9a5b59fd-dfe1-4370-8768-28c4a001c9e3\") " pod="openshift-marketplace/community-operators-9ckvv"
Jan 20 09:14:12 crc kubenswrapper[5115]: I0120 09:14:12.803622 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhkm9\" (UniqueName: \"kubernetes.io/projected/9a5b59fd-dfe1-4370-8768-28c4a001c9e3-kube-api-access-jhkm9\") pod \"community-operators-9ckvv\" (UID: \"9a5b59fd-dfe1-4370-8768-28c4a001c9e3\") " pod="openshift-marketplace/community-operators-9ckvv"
Jan 20 09:14:12 crc kubenswrapper[5115]: I0120 09:14:12.803718 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a5b59fd-dfe1-4370-8768-28c4a001c9e3-catalog-content\") pod \"community-operators-9ckvv\" (UID: \"9a5b59fd-dfe1-4370-8768-28c4a001c9e3\") " pod="openshift-marketplace/community-operators-9ckvv"
Jan 20 09:14:12 crc kubenswrapper[5115]: I0120 09:14:12.905349 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a5b59fd-dfe1-4370-8768-28c4a001c9e3-catalog-content\") pod \"community-operators-9ckvv\" (UID: \"9a5b59fd-dfe1-4370-8768-28c4a001c9e3\") " pod="openshift-marketplace/community-operators-9ckvv"
Jan 20 09:14:12 crc kubenswrapper[5115]: I0120 09:14:12.905709 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a5b59fd-dfe1-4370-8768-28c4a001c9e3-utilities\") pod \"community-operators-9ckvv\" (UID: \"9a5b59fd-dfe1-4370-8768-28c4a001c9e3\") " pod="openshift-marketplace/community-operators-9ckvv"
Jan 20 09:14:12 crc kubenswrapper[5115]: I0120 09:14:12.905913 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jhkm9\" (UniqueName: \"kubernetes.io/projected/9a5b59fd-dfe1-4370-8768-28c4a001c9e3-kube-api-access-jhkm9\") pod \"community-operators-9ckvv\" (UID: \"9a5b59fd-dfe1-4370-8768-28c4a001c9e3\") " pod="openshift-marketplace/community-operators-9ckvv"
Jan 20 09:14:12 crc kubenswrapper[5115]: I0120 09:14:12.905992 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a5b59fd-dfe1-4370-8768-28c4a001c9e3-catalog-content\") pod \"community-operators-9ckvv\" (UID: \"9a5b59fd-dfe1-4370-8768-28c4a001c9e3\") " pod="openshift-marketplace/community-operators-9ckvv"
Jan 20 09:14:12 crc kubenswrapper[5115]: I0120 09:14:12.906342 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a5b59fd-dfe1-4370-8768-28c4a001c9e3-utilities\") pod \"community-operators-9ckvv\" (UID: \"9a5b59fd-dfe1-4370-8768-28c4a001c9e3\") " pod="openshift-marketplace/community-operators-9ckvv"
Jan 20 09:14:12 crc kubenswrapper[5115]: I0120 09:14:12.929407 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhkm9\" (UniqueName: \"kubernetes.io/projected/9a5b59fd-dfe1-4370-8768-28c4a001c9e3-kube-api-access-jhkm9\") pod \"community-operators-9ckvv\" (UID: \"9a5b59fd-dfe1-4370-8768-28c4a001c9e3\") " pod="openshift-marketplace/community-operators-9ckvv"
Jan 20 09:14:13 crc kubenswrapper[5115]: I0120 09:14:13.118962 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9ckvv"
Jan 20 09:14:13 crc kubenswrapper[5115]: I0120 09:14:13.532713 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9ckvv"]
Jan 20 09:14:13 crc kubenswrapper[5115]: I0120 09:14:13.769047 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9ckvv" event={"ID":"9a5b59fd-dfe1-4370-8768-28c4a001c9e3","Type":"ContainerStarted","Data":"5efa34340cba2c121798cf78c6f46b08114ceff4e45cd2c65994e420cad7dc49"}
Jan 20 09:14:13 crc kubenswrapper[5115]: I0120 09:14:13.769111 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9ckvv" event={"ID":"9a5b59fd-dfe1-4370-8768-28c4a001c9e3","Type":"ContainerStarted","Data":"bda71f569251559a5dda6b1fd60e8b4e4deca6ef39824928c7f9a95fcce2a666"}
Jan 20 09:14:13 crc kubenswrapper[5115]: I0120 09:14:13.773461 5115 generic.go:358] "Generic (PLEG): container finished" podID="aad987c3-e453-432f-8c54-3c7a336446f9" containerID="5af5cfd237071b03c8f1cb8f38c284b6d8474e8eadda0d6f831afb21a4c3a022" exitCode=0
Jan 20 09:14:13 crc kubenswrapper[5115]: I0120 09:14:13.773550 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fz98h" event={"ID":"aad987c3-e453-432f-8c54-3c7a336446f9","Type":"ContainerDied","Data":"5af5cfd237071b03c8f1cb8f38c284b6d8474e8eadda0d6f831afb21a4c3a022"}
Jan 20 09:14:14 crc kubenswrapper[5115]: I0120 09:14:14.084153 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-wg9m7"]
Jan 20 09:14:14 crc kubenswrapper[5115]: I0120 09:14:14.090498 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-wg9m7"
Jan 20 09:14:14 crc kubenswrapper[5115]: I0120 09:14:14.093306 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-wg9m7"]
Jan 20 09:14:14 crc kubenswrapper[5115]: I0120 09:14:14.141361 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wbbcl"]
Jan 20 09:14:14 crc kubenswrapper[5115]: I0120 09:14:14.146062 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wbbcl"]
Jan 20 09:14:14 crc kubenswrapper[5115]: I0120 09:14:14.146209 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wbbcl"
Jan 20 09:14:14 crc kubenswrapper[5115]: I0120 09:14:14.149022 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\""
Jan 20 09:14:14 crc kubenswrapper[5115]: I0120 09:14:14.238789 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-wg9m7\" (UID: \"68b6fa77-d9ae-4530-8ee7-9c67130972e0\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-wg9m7"
Jan 20 09:14:14 crc kubenswrapper[5115]: I0120 09:14:14.238829 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/68b6fa77-d9ae-4530-8ee7-9c67130972e0-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-wg9m7\" (UID: \"68b6fa77-d9ae-4530-8ee7-9c67130972e0\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-wg9m7"
Jan 20 09:14:14 crc kubenswrapper[5115]: I0120 09:14:14.238852 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/68b6fa77-d9ae-4530-8ee7-9c67130972e0-bound-sa-token\") pod \"image-registry-5d9d95bf5b-wg9m7\" (UID: \"68b6fa77-d9ae-4530-8ee7-9c67130972e0\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-wg9m7"
Jan 20 09:14:14 crc kubenswrapper[5115]: I0120 09:14:14.238885 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stmtg\" (UniqueName: \"kubernetes.io/projected/68b6fa77-d9ae-4530-8ee7-9c67130972e0-kube-api-access-stmtg\") pod \"image-registry-5d9d95bf5b-wg9m7\" (UID: \"68b6fa77-d9ae-4530-8ee7-9c67130972e0\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-wg9m7"
Jan 20 09:14:14 crc kubenswrapper[5115]: I0120 09:14:14.238952 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/68b6fa77-d9ae-4530-8ee7-9c67130972e0-trusted-ca\") pod \"image-registry-5d9d95bf5b-wg9m7\" (UID: \"68b6fa77-d9ae-4530-8ee7-9c67130972e0\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-wg9m7"
Jan 20 09:14:14 crc kubenswrapper[5115]: I0120 09:14:14.238981 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/68b6fa77-d9ae-4530-8ee7-9c67130972e0-registry-tls\") pod \"image-registry-5d9d95bf5b-wg9m7\" (UID: \"68b6fa77-d9ae-4530-8ee7-9c67130972e0\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-wg9m7"
Jan 20 09:14:14 crc kubenswrapper[5115]: I0120 09:14:14.239003 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/68b6fa77-d9ae-4530-8ee7-9c67130972e0-registry-certificates\") pod \"image-registry-5d9d95bf5b-wg9m7\" (UID: \"68b6fa77-d9ae-4530-8ee7-9c67130972e0\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-wg9m7"
Jan 20 09:14:14 crc kubenswrapper[5115]: I0120 09:14:14.239071 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/68b6fa77-d9ae-4530-8ee7-9c67130972e0-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-wg9m7\" (UID: \"68b6fa77-d9ae-4530-8ee7-9c67130972e0\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-wg9m7"
Jan 20 09:14:14 crc kubenswrapper[5115]: I0120 09:14:14.271652 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-wg9m7\" (UID: \"68b6fa77-d9ae-4530-8ee7-9c67130972e0\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-wg9m7"
Jan 20 09:14:14 crc kubenswrapper[5115]: I0120 09:14:14.340673 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/68b6fa77-d9ae-4530-8ee7-9c67130972e0-registry-tls\") pod \"image-registry-5d9d95bf5b-wg9m7\" (UID: \"68b6fa77-d9ae-4530-8ee7-9c67130972e0\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-wg9m7"
Jan 20 09:14:14 crc kubenswrapper[5115]: I0120 09:14:14.340758 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/68b6fa77-d9ae-4530-8ee7-9c67130972e0-registry-certificates\") pod \"image-registry-5d9d95bf5b-wg9m7\" (UID: \"68b6fa77-d9ae-4530-8ee7-9c67130972e0\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-wg9m7"
Jan 20 09:14:14 crc kubenswrapper[5115]: I0120 09:14:14.340997 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/68b6fa77-d9ae-4530-8ee7-9c67130972e0-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-wg9m7\" (UID: \"68b6fa77-d9ae-4530-8ee7-9c67130972e0\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-wg9m7"
Jan 20 09:14:14 crc kubenswrapper[5115]: I0120 09:14:14.341147 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6s2r\" (UniqueName: \"kubernetes.io/projected/cdf226cf-7ac3-4329-a01c-54a92f0189f8-kube-api-access-q6s2r\") pod \"certified-operators-wbbcl\" (UID: \"cdf226cf-7ac3-4329-a01c-54a92f0189f8\") " pod="openshift-marketplace/certified-operators-wbbcl"
Jan 20 09:14:14 crc kubenswrapper[5115]: I0120 09:14:14.341195 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cdf226cf-7ac3-4329-a01c-54a92f0189f8-utilities\") pod \"certified-operators-wbbcl\" (UID: \"cdf226cf-7ac3-4329-a01c-54a92f0189f8\") " pod="openshift-marketplace/certified-operators-wbbcl"
Jan 20 09:14:14 crc kubenswrapper[5115]: I0120 09:14:14.341235 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cdf226cf-7ac3-4329-a01c-54a92f0189f8-catalog-content\") pod \"certified-operators-wbbcl\" (UID: \"cdf226cf-7ac3-4329-a01c-54a92f0189f8\") " pod="openshift-marketplace/certified-operators-wbbcl"
Jan 20 09:14:14 crc kubenswrapper[5115]: I0120 09:14:14.341296 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/68b6fa77-d9ae-4530-8ee7-9c67130972e0-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-wg9m7\" (UID: \"68b6fa77-d9ae-4530-8ee7-9c67130972e0\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-wg9m7"
Jan 20 09:14:14 crc kubenswrapper[5115]: I0120 09:14:14.341335 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/68b6fa77-d9ae-4530-8ee7-9c67130972e0-bound-sa-token\") pod \"image-registry-5d9d95bf5b-wg9m7\" (UID: \"68b6fa77-d9ae-4530-8ee7-9c67130972e0\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-wg9m7"
Jan 20 09:14:14 crc kubenswrapper[5115]: I0120 09:14:14.341475 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-stmtg\" (UniqueName: \"kubernetes.io/projected/68b6fa77-d9ae-4530-8ee7-9c67130972e0-kube-api-access-stmtg\") pod \"image-registry-5d9d95bf5b-wg9m7\" (UID: \"68b6fa77-d9ae-4530-8ee7-9c67130972e0\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-wg9m7"
Jan 20 09:14:14 crc kubenswrapper[5115]: I0120 09:14:14.341544 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/68b6fa77-d9ae-4530-8ee7-9c67130972e0-trusted-ca\") pod \"image-registry-5d9d95bf5b-wg9m7\" (UID: \"68b6fa77-d9ae-4530-8ee7-9c67130972e0\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-wg9m7"
Jan 20 09:14:14 crc kubenswrapper[5115]: I0120 09:14:14.342616 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/68b6fa77-d9ae-4530-8ee7-9c67130972e0-registry-certificates\") pod \"image-registry-5d9d95bf5b-wg9m7\" (UID: \"68b6fa77-d9ae-4530-8ee7-9c67130972e0\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-wg9m7"
Jan 20 09:14:14 crc kubenswrapper[5115]: I0120 09:14:14.342879 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/68b6fa77-d9ae-4530-8ee7-9c67130972e0-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-wg9m7\" (UID: \"68b6fa77-d9ae-4530-8ee7-9c67130972e0\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-wg9m7"
Jan 20 09:14:14 crc kubenswrapper[5115]: I0120 09:14:14.345842 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/68b6fa77-d9ae-4530-8ee7-9c67130972e0-trusted-ca\") pod \"image-registry-5d9d95bf5b-wg9m7\" (UID: \"68b6fa77-d9ae-4530-8ee7-9c67130972e0\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-wg9m7"
Jan 20 09:14:14 crc kubenswrapper[5115]: I0120 09:14:14.351015 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/68b6fa77-d9ae-4530-8ee7-9c67130972e0-registry-tls\") pod \"image-registry-5d9d95bf5b-wg9m7\" (UID: \"68b6fa77-d9ae-4530-8ee7-9c67130972e0\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-wg9m7"
Jan 20 09:14:14 crc kubenswrapper[5115]: I0120 09:14:14.352369 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/68b6fa77-d9ae-4530-8ee7-9c67130972e0-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-wg9m7\" (UID: \"68b6fa77-d9ae-4530-8ee7-9c67130972e0\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-wg9m7"
Jan 20 09:14:14 crc kubenswrapper[5115]: I0120 09:14:14.363723 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-stmtg\" (UniqueName: \"kubernetes.io/projected/68b6fa77-d9ae-4530-8ee7-9c67130972e0-kube-api-access-stmtg\") pod \"image-registry-5d9d95bf5b-wg9m7\" (UID: \"68b6fa77-d9ae-4530-8ee7-9c67130972e0\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-wg9m7"
Jan 20 09:14:14 crc kubenswrapper[5115]: I0120 09:14:14.365987 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/68b6fa77-d9ae-4530-8ee7-9c67130972e0-bound-sa-token\") pod \"image-registry-5d9d95bf5b-wg9m7\" (UID: \"68b6fa77-d9ae-4530-8ee7-9c67130972e0\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-wg9m7"
Jan 20 09:14:14 crc kubenswrapper[5115]: I0120 09:14:14.443575 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-q6s2r\" (UniqueName: \"kubernetes.io/projected/cdf226cf-7ac3-4329-a01c-54a92f0189f8-kube-api-access-q6s2r\") pod \"certified-operators-wbbcl\" (UID: \"cdf226cf-7ac3-4329-a01c-54a92f0189f8\") " pod="openshift-marketplace/certified-operators-wbbcl"
Jan 20 09:14:14 crc kubenswrapper[5115]: I0120 09:14:14.444199 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cdf226cf-7ac3-4329-a01c-54a92f0189f8-utilities\") pod \"certified-operators-wbbcl\" (UID: \"cdf226cf-7ac3-4329-a01c-54a92f0189f8\") " pod="openshift-marketplace/certified-operators-wbbcl"
Jan 20 09:14:14 crc kubenswrapper[5115]: I0120 09:14:14.444381 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cdf226cf-7ac3-4329-a01c-54a92f0189f8-catalog-content\") pod \"certified-operators-wbbcl\" (UID: \"cdf226cf-7ac3-4329-a01c-54a92f0189f8\") " pod="openshift-marketplace/certified-operators-wbbcl"
Jan 20 09:14:14 crc kubenswrapper[5115]: I0120 09:14:14.444754 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cdf226cf-7ac3-4329-a01c-54a92f0189f8-utilities\") pod \"certified-operators-wbbcl\" (UID: \"cdf226cf-7ac3-4329-a01c-54a92f0189f8\") " pod="openshift-marketplace/certified-operators-wbbcl"
Jan 20 09:14:14 crc kubenswrapper[5115]: I0120 09:14:14.444810 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cdf226cf-7ac3-4329-a01c-54a92f0189f8-catalog-content\") pod \"certified-operators-wbbcl\" (UID: \"cdf226cf-7ac3-4329-a01c-54a92f0189f8\") " pod="openshift-marketplace/certified-operators-wbbcl"
Jan 20 09:14:14 crc kubenswrapper[5115]: I0120 09:14:14.466250 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6s2r\" (UniqueName: \"kubernetes.io/projected/cdf226cf-7ac3-4329-a01c-54a92f0189f8-kube-api-access-q6s2r\") pod \"certified-operators-wbbcl\" (UID: \"cdf226cf-7ac3-4329-a01c-54a92f0189f8\") " pod="openshift-marketplace/certified-operators-wbbcl"
Jan 20 09:14:14 crc kubenswrapper[5115]: I0120 09:14:14.469065 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-wg9m7"
Jan 20 09:14:14 crc kubenswrapper[5115]: I0120 09:14:14.477569 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wbbcl"
Jan 20 09:14:14 crc kubenswrapper[5115]: I0120 09:14:14.787758 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fz98h" event={"ID":"aad987c3-e453-432f-8c54-3c7a336446f9","Type":"ContainerStarted","Data":"bb0a260d90a078ed905468f6aea6e5b913c206257bfaabaacf96b2aa5f7abc05"}
Jan 20 09:14:14 crc kubenswrapper[5115]: I0120 09:14:14.789968 5115 generic.go:358] "Generic (PLEG): container finished" podID="9a5b59fd-dfe1-4370-8768-28c4a001c9e3" containerID="5efa34340cba2c121798cf78c6f46b08114ceff4e45cd2c65994e420cad7dc49" exitCode=0
Jan 20 09:14:14 crc kubenswrapper[5115]: I0120 09:14:14.790029 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9ckvv" event={"ID":"9a5b59fd-dfe1-4370-8768-28c4a001c9e3","Type":"ContainerDied","Data":"5efa34340cba2c121798cf78c6f46b08114ceff4e45cd2c65994e420cad7dc49"}
Jan 20 09:14:14 crc kubenswrapper[5115]: I0120 09:14:14.882784 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-wg9m7"]
Jan 20 09:14:14 crc kubenswrapper[5115]: W0120 09:14:14.892768 5115 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod68b6fa77_d9ae_4530_8ee7_9c67130972e0.slice/crio-f6326eba7ad5b435563510f8e90a89487e14ca172b6a8e7f824ea835cc7325e7 WatchSource:0}: Error finding container f6326eba7ad5b435563510f8e90a89487e14ca172b6a8e7f824ea835cc7325e7: Status 404 returned error can't find the container with id f6326eba7ad5b435563510f8e90a89487e14ca172b6a8e7f824ea835cc7325e7
Jan 20 09:14:14 crc kubenswrapper[5115]: I0120 09:14:14.978304 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wbbcl"]
Jan 20 09:14:15 crc kubenswrapper[5115]: W0120 09:14:15.001758 5115 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcdf226cf_7ac3_4329_a01c_54a92f0189f8.slice/crio-7056cdf8a934a4a064c2fff93131a23a634074032ebcc14e65a3ae8b7d9efee0 WatchSource:0}: Error finding container 7056cdf8a934a4a064c2fff93131a23a634074032ebcc14e65a3ae8b7d9efee0: Status 404 returned error can't find the container with id 7056cdf8a934a4a064c2fff93131a23a634074032ebcc14e65a3ae8b7d9efee0
Jan 20 09:14:15 crc kubenswrapper[5115]: I0120 09:14:15.525110 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vl5h2"]
Jan 20 09:14:15 crc kubenswrapper[5115]: I0120 09:14:15.529844 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vl5h2"
Jan 20 09:14:15 crc kubenswrapper[5115]: I0120 09:14:15.533702 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\""
Jan 20 09:14:15 crc kubenswrapper[5115]: I0120 09:14:15.535780 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vl5h2"]
Jan 20 09:14:15 crc kubenswrapper[5115]: I0120 09:14:15.663139 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ea34b88-772f-448a-ba98-33a5deda3740-utilities\") pod \"redhat-marketplace-vl5h2\" (UID: \"3ea34b88-772f-448a-ba98-33a5deda3740\") " pod="openshift-marketplace/redhat-marketplace-vl5h2"
Jan 20 09:14:15 crc kubenswrapper[5115]: I0120 09:14:15.663441 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ea34b88-772f-448a-ba98-33a5deda3740-catalog-content\") pod \"redhat-marketplace-vl5h2\" (UID: \"3ea34b88-772f-448a-ba98-33a5deda3740\") " pod="openshift-marketplace/redhat-marketplace-vl5h2"
Jan 20 09:14:15 crc kubenswrapper[5115]: I0120 09:14:15.663490 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vjdx\" (UniqueName: \"kubernetes.io/projected/3ea34b88-772f-448a-ba98-33a5deda3740-kube-api-access-7vjdx\") pod \"redhat-marketplace-vl5h2\" (UID: \"3ea34b88-772f-448a-ba98-33a5deda3740\") " pod="openshift-marketplace/redhat-marketplace-vl5h2"
Jan 20 09:14:15 crc kubenswrapper[5115]: I0120 09:14:15.764603 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ea34b88-772f-448a-ba98-33a5deda3740-catalog-content\") pod \"redhat-marketplace-vl5h2\" (UID: \"3ea34b88-772f-448a-ba98-33a5deda3740\") " pod="openshift-marketplace/redhat-marketplace-vl5h2"
Jan 20 09:14:15 crc kubenswrapper[5115]: I0120 09:14:15.764646 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7vjdx\" (UniqueName: \"kubernetes.io/projected/3ea34b88-772f-448a-ba98-33a5deda3740-kube-api-access-7vjdx\") pod \"redhat-marketplace-vl5h2\" (UID: \"3ea34b88-772f-448a-ba98-33a5deda3740\") " pod="openshift-marketplace/redhat-marketplace-vl5h2"
Jan 20 09:14:15 crc kubenswrapper[5115]: I0120 09:14:15.764730 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ea34b88-772f-448a-ba98-33a5deda3740-utilities\") pod \"redhat-marketplace-vl5h2\" (UID: \"3ea34b88-772f-448a-ba98-33a5deda3740\") " pod="openshift-marketplace/redhat-marketplace-vl5h2"
Jan 20 09:14:15 crc kubenswrapper[5115]: I0120 09:14:15.765141 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ea34b88-772f-448a-ba98-33a5deda3740-utilities\") pod \"redhat-marketplace-vl5h2\" (UID: \"3ea34b88-772f-448a-ba98-33a5deda3740\") " pod="openshift-marketplace/redhat-marketplace-vl5h2"
Jan 20 09:14:15 crc kubenswrapper[5115]: I0120 09:14:15.765169 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ea34b88-772f-448a-ba98-33a5deda3740-catalog-content\") pod \"redhat-marketplace-vl5h2\" (UID: \"3ea34b88-772f-448a-ba98-33a5deda3740\") " pod="openshift-marketplace/redhat-marketplace-vl5h2"
Jan 20 09:14:15 crc kubenswrapper[5115]: I0120 09:14:15.796255 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vjdx\" (UniqueName: \"kubernetes.io/projected/3ea34b88-772f-448a-ba98-33a5deda3740-kube-api-access-7vjdx\") pod \"redhat-marketplace-vl5h2\" (UID: \"3ea34b88-772f-448a-ba98-33a5deda3740\") " pod="openshift-marketplace/redhat-marketplace-vl5h2"
Jan 20 09:14:15 crc kubenswrapper[5115]: I0120 09:14:15.797420 5115 generic.go:358] "Generic (PLEG): container finished" podID="aad987c3-e453-432f-8c54-3c7a336446f9" containerID="bb0a260d90a078ed905468f6aea6e5b913c206257bfaabaacf96b2aa5f7abc05" exitCode=0
Jan 20 09:14:15 crc kubenswrapper[5115]: I0120 09:14:15.797518 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fz98h" event={"ID":"aad987c3-e453-432f-8c54-3c7a336446f9","Type":"ContainerDied","Data":"bb0a260d90a078ed905468f6aea6e5b913c206257bfaabaacf96b2aa5f7abc05"}
Jan 20 09:14:15 crc kubenswrapper[5115]: I0120 09:14:15.799427 5115 generic.go:358] "Generic (PLEG): container finished" podID="cdf226cf-7ac3-4329-a01c-54a92f0189f8" containerID="af44ae2deaab214d2fa993da72d2f6a6652798315b1ab10ffc25a3206614468d" exitCode=0
Jan 20 09:14:15 crc kubenswrapper[5115]: I0120 09:14:15.799600 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wbbcl" event={"ID":"cdf226cf-7ac3-4329-a01c-54a92f0189f8","Type":"ContainerDied","Data":"af44ae2deaab214d2fa993da72d2f6a6652798315b1ab10ffc25a3206614468d"}
Jan 20 09:14:15 crc kubenswrapper[5115]: I0120 09:14:15.799686 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wbbcl" event={"ID":"cdf226cf-7ac3-4329-a01c-54a92f0189f8","Type":"ContainerStarted","Data":"7056cdf8a934a4a064c2fff93131a23a634074032ebcc14e65a3ae8b7d9efee0"}
Jan 20 09:14:15 crc kubenswrapper[5115]: I0120 09:14:15.803949 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9ckvv" event={"ID":"9a5b59fd-dfe1-4370-8768-28c4a001c9e3","Type":"ContainerStarted","Data":"863a773a9f1ae2a215a65e4189697107176edf4f24a2efb98e697fb757149aae"}
Jan 20 09:14:15 crc kubenswrapper[5115]: I0120 09:14:15.805565 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-wg9m7" event={"ID":"68b6fa77-d9ae-4530-8ee7-9c67130972e0","Type":"ContainerStarted","Data":"5a458c0a79818bd65bde3fefe7db8a798ff478182650ae0a031ef73983042e68"}
Jan 20 09:14:15 crc kubenswrapper[5115]: I0120 09:14:15.805601 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-wg9m7" event={"ID":"68b6fa77-d9ae-4530-8ee7-9c67130972e0","Type":"ContainerStarted","Data":"f6326eba7ad5b435563510f8e90a89487e14ca172b6a8e7f824ea835cc7325e7"}
Jan 20 09:14:15 crc kubenswrapper[5115]: I0120 09:14:15.811924 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-wg9m7"
Jan 20 09:14:15 crc kubenswrapper[5115]: I0120 09:14:15.862554 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-5d9d95bf5b-wg9m7" podStartSLOduration=1.862436271 podStartE2EDuration="1.862436271s" podCreationTimestamp="2026-01-20 09:14:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:14:15.862292708 +0000 UTC m=+366.031071238" watchObservedRunningTime="2026-01-20 09:14:15.862436271 +0000 UTC m=+366.031214841"
Jan 20 09:14:15 crc kubenswrapper[5115]: I0120 09:14:15.863615 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vl5h2"
Jan 20 09:14:16 crc kubenswrapper[5115]: I0120 09:14:16.279335 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vl5h2"]
Jan 20 09:14:16 crc kubenswrapper[5115]: W0120 09:14:16.296811 5115 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3ea34b88_772f_448a_ba98_33a5deda3740.slice/crio-40872df91a96dca0f60abb2cc208add9d0fb45faa884c89a2cf5376512c4c900 WatchSource:0}: Error finding container 40872df91a96dca0f60abb2cc208add9d0fb45faa884c89a2cf5376512c4c900: Status 404 returned error can't find the container with id 40872df91a96dca0f60abb2cc208add9d0fb45faa884c89a2cf5376512c4c900
Jan 20 09:14:16 crc kubenswrapper[5115]: I0120 09:14:16.815721 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fz98h" event={"ID":"aad987c3-e453-432f-8c54-3c7a336446f9","Type":"ContainerStarted","Data":"32439bae146c329d1d7d55b6eb0230034190d0ac960506954672ab7530573ab6"}
Jan 20 09:14:16 crc kubenswrapper[5115]: I0120 09:14:16.820121 5115 generic.go:358] "Generic (PLEG): container finished" podID="9a5b59fd-dfe1-4370-8768-28c4a001c9e3" containerID="863a773a9f1ae2a215a65e4189697107176edf4f24a2efb98e697fb757149aae" exitCode=0
Jan 20 09:14:16 crc kubenswrapper[5115]: I0120 09:14:16.820215 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9ckvv" event={"ID":"9a5b59fd-dfe1-4370-8768-28c4a001c9e3","Type":"ContainerDied","Data":"863a773a9f1ae2a215a65e4189697107176edf4f24a2efb98e697fb757149aae"}
Jan 20 09:14:16 crc kubenswrapper[5115]: I0120 09:14:16.824756 5115 generic.go:358] "Generic (PLEG): container finished" podID="3ea34b88-772f-448a-ba98-33a5deda3740" containerID="4ce0d5d6ba15819cf5c63b70361421a9ac213971329750f111f55c3a49b6e8f7" exitCode=0
Jan 20 09:14:16 crc kubenswrapper[5115]: I0120 09:14:16.824812 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vl5h2" event={"ID":"3ea34b88-772f-448a-ba98-33a5deda3740","Type":"ContainerDied","Data":"4ce0d5d6ba15819cf5c63b70361421a9ac213971329750f111f55c3a49b6e8f7"}
Jan 20 09:14:16 crc kubenswrapper[5115]: I0120 09:14:16.825029 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vl5h2" event={"ID":"3ea34b88-772f-448a-ba98-33a5deda3740","Type":"ContainerStarted","Data":"40872df91a96dca0f60abb2cc208add9d0fb45faa884c89a2cf5376512c4c900"}
Jan 20 09:14:16 crc kubenswrapper[5115]: I0120 09:14:16.832102 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fz98h" podStartSLOduration=5.056787576 podStartE2EDuration="5.832085716s" podCreationTimestamp="2026-01-20 09:14:11 +0000 UTC" firstStartedPulling="2026-01-20 09:14:13.775109681 +0000 UTC m=+363.943888211" lastFinishedPulling="2026-01-20 09:14:14.550407821 +0000 UTC m=+364.719186351" observedRunningTime="2026-01-20 09:14:16.83001666 +0000 UTC m=+366.998795190" watchObservedRunningTime="2026-01-20 09:14:16.832085716 +0000 UTC m=+367.000864246"
Jan 20 09:14:17 crc kubenswrapper[5115]: I0120 09:14:17.842119 5115 generic.go:358] "Generic (PLEG): container finished" podID="cdf226cf-7ac3-4329-a01c-54a92f0189f8" containerID="ae6a2033d223f73e842bc54ca33772fbb34c935004ebe6d3c3590cb8b32d00b8" exitCode=0
Jan 20 09:14:17 crc kubenswrapper[5115]: I0120 09:14:17.842236 5115
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wbbcl" event={"ID":"cdf226cf-7ac3-4329-a01c-54a92f0189f8","Type":"ContainerDied","Data":"ae6a2033d223f73e842bc54ca33772fbb34c935004ebe6d3c3590cb8b32d00b8"} Jan 20 09:14:17 crc kubenswrapper[5115]: I0120 09:14:17.845970 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9ckvv" event={"ID":"9a5b59fd-dfe1-4370-8768-28c4a001c9e3","Type":"ContainerStarted","Data":"39d1c11030ba364157213f20de2ba3fe0ea5ee65763dfb9f969e7f1e088bf790"} Jan 20 09:14:17 crc kubenswrapper[5115]: I0120 09:14:17.875991 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9ckvv" podStartSLOduration=5.176124837 podStartE2EDuration="5.875969567s" podCreationTimestamp="2026-01-20 09:14:12 +0000 UTC" firstStartedPulling="2026-01-20 09:14:14.790906373 +0000 UTC m=+364.959684903" lastFinishedPulling="2026-01-20 09:14:15.490751093 +0000 UTC m=+365.659529633" observedRunningTime="2026-01-20 09:14:17.872974216 +0000 UTC m=+368.041752776" watchObservedRunningTime="2026-01-20 09:14:17.875969567 +0000 UTC m=+368.044748107" Jan 20 09:14:18 crc kubenswrapper[5115]: I0120 09:14:18.853946 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wbbcl" event={"ID":"cdf226cf-7ac3-4329-a01c-54a92f0189f8","Type":"ContainerStarted","Data":"871d5162bbab49621e927f92365411bb769bd2349ca0295ff75fd24aad381f56"} Jan 20 09:14:18 crc kubenswrapper[5115]: I0120 09:14:18.856115 5115 generic.go:358] "Generic (PLEG): container finished" podID="3ea34b88-772f-448a-ba98-33a5deda3740" containerID="bc0c9d83ba57010823d073ae4e064475414173b2e3a7dec40dc8810f5a7485f8" exitCode=0 Jan 20 09:14:18 crc kubenswrapper[5115]: I0120 09:14:18.856242 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vl5h2" 
event={"ID":"3ea34b88-772f-448a-ba98-33a5deda3740","Type":"ContainerDied","Data":"bc0c9d83ba57010823d073ae4e064475414173b2e3a7dec40dc8810f5a7485f8"} Jan 20 09:14:18 crc kubenswrapper[5115]: I0120 09:14:18.872359 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wbbcl" podStartSLOduration=3.594420485 podStartE2EDuration="4.872341394s" podCreationTimestamp="2026-01-20 09:14:14 +0000 UTC" firstStartedPulling="2026-01-20 09:14:15.800084505 +0000 UTC m=+365.968863035" lastFinishedPulling="2026-01-20 09:14:17.078005414 +0000 UTC m=+367.246783944" observedRunningTime="2026-01-20 09:14:18.870977387 +0000 UTC m=+369.039755927" watchObservedRunningTime="2026-01-20 09:14:18.872341394 +0000 UTC m=+369.041119924" Jan 20 09:14:19 crc kubenswrapper[5115]: I0120 09:14:19.863128 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vl5h2" event={"ID":"3ea34b88-772f-448a-ba98-33a5deda3740","Type":"ContainerStarted","Data":"277f00fb916af87255d45dbd71d4e12a5c6d49416d37aafc1b909e1ea277f2ad"} Jan 20 09:14:19 crc kubenswrapper[5115]: I0120 09:14:19.879784 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vl5h2" podStartSLOduration=3.466517403 podStartE2EDuration="4.879765539s" podCreationTimestamp="2026-01-20 09:14:15 +0000 UTC" firstStartedPulling="2026-01-20 09:14:16.82594797 +0000 UTC m=+366.994726540" lastFinishedPulling="2026-01-20 09:14:18.239196156 +0000 UTC m=+368.407974676" observedRunningTime="2026-01-20 09:14:19.878618818 +0000 UTC m=+370.047397368" watchObservedRunningTime="2026-01-20 09:14:19.879765539 +0000 UTC m=+370.048544089" Jan 20 09:14:22 crc kubenswrapper[5115]: I0120 09:14:22.104857 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-fz98h" Jan 20 09:14:22 crc kubenswrapper[5115]: I0120 09:14:22.105314 5115 
kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-fz98h" Jan 20 09:14:22 crc kubenswrapper[5115]: I0120 09:14:22.145331 5115 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fz98h" Jan 20 09:14:22 crc kubenswrapper[5115]: I0120 09:14:22.943333 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-fz98h" Jan 20 09:14:23 crc kubenswrapper[5115]: I0120 09:14:23.120114 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-9ckvv" Jan 20 09:14:23 crc kubenswrapper[5115]: I0120 09:14:23.120349 5115 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9ckvv" Jan 20 09:14:23 crc kubenswrapper[5115]: I0120 09:14:23.157476 5115 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9ckvv" Jan 20 09:14:23 crc kubenswrapper[5115]: I0120 09:14:23.942149 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9ckvv" Jan 20 09:14:24 crc kubenswrapper[5115]: I0120 09:14:24.478220 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-wbbcl" Jan 20 09:14:24 crc kubenswrapper[5115]: I0120 09:14:24.478406 5115 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wbbcl" Jan 20 09:14:24 crc kubenswrapper[5115]: I0120 09:14:24.522160 5115 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wbbcl" Jan 20 09:14:24 crc kubenswrapper[5115]: I0120 09:14:24.941380 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/certified-operators-wbbcl" Jan 20 09:14:25 crc kubenswrapper[5115]: I0120 09:14:25.864047 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-vl5h2" Jan 20 09:14:25 crc kubenswrapper[5115]: I0120 09:14:25.864352 5115 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vl5h2" Jan 20 09:14:25 crc kubenswrapper[5115]: I0120 09:14:25.920974 5115 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vl5h2" Jan 20 09:14:25 crc kubenswrapper[5115]: I0120 09:14:25.969154 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vl5h2" Jan 20 09:14:36 crc kubenswrapper[5115]: I0120 09:14:36.831196 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-wg9m7" Jan 20 09:14:36 crc kubenswrapper[5115]: I0120 09:14:36.886334 5115 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-b674j"] Jan 20 09:15:00 crc kubenswrapper[5115]: I0120 09:15:00.204462 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29481675-n6rz7"] Jan 20 09:15:00 crc kubenswrapper[5115]: I0120 09:15:00.227112 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29481675-n6rz7"] Jan 20 09:15:00 crc kubenswrapper[5115]: I0120 09:15:00.227550 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481675-n6rz7" Jan 20 09:15:00 crc kubenswrapper[5115]: I0120 09:15:00.229826 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Jan 20 09:15:00 crc kubenswrapper[5115]: I0120 09:15:00.230930 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Jan 20 09:15:00 crc kubenswrapper[5115]: I0120 09:15:00.334674 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bec78294-76de-4a69-ba13-bf1bc31bd32f-config-volume\") pod \"collect-profiles-29481675-n6rz7\" (UID: \"bec78294-76de-4a69-ba13-bf1bc31bd32f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481675-n6rz7" Jan 20 09:15:00 crc kubenswrapper[5115]: I0120 09:15:00.334812 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlz9d\" (UniqueName: \"kubernetes.io/projected/bec78294-76de-4a69-ba13-bf1bc31bd32f-kube-api-access-wlz9d\") pod \"collect-profiles-29481675-n6rz7\" (UID: \"bec78294-76de-4a69-ba13-bf1bc31bd32f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481675-n6rz7" Jan 20 09:15:00 crc kubenswrapper[5115]: I0120 09:15:00.334888 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bec78294-76de-4a69-ba13-bf1bc31bd32f-secret-volume\") pod \"collect-profiles-29481675-n6rz7\" (UID: \"bec78294-76de-4a69-ba13-bf1bc31bd32f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481675-n6rz7" Jan 20 09:15:00 crc kubenswrapper[5115]: I0120 09:15:00.437119 5115 reconciler_common.go:224] "operationExecutor.MountVolume started 
for volume \"kube-api-access-wlz9d\" (UniqueName: \"kubernetes.io/projected/bec78294-76de-4a69-ba13-bf1bc31bd32f-kube-api-access-wlz9d\") pod \"collect-profiles-29481675-n6rz7\" (UID: \"bec78294-76de-4a69-ba13-bf1bc31bd32f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481675-n6rz7" Jan 20 09:15:00 crc kubenswrapper[5115]: I0120 09:15:00.437210 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bec78294-76de-4a69-ba13-bf1bc31bd32f-secret-volume\") pod \"collect-profiles-29481675-n6rz7\" (UID: \"bec78294-76de-4a69-ba13-bf1bc31bd32f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481675-n6rz7" Jan 20 09:15:00 crc kubenswrapper[5115]: I0120 09:15:00.437256 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bec78294-76de-4a69-ba13-bf1bc31bd32f-config-volume\") pod \"collect-profiles-29481675-n6rz7\" (UID: \"bec78294-76de-4a69-ba13-bf1bc31bd32f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481675-n6rz7" Jan 20 09:15:00 crc kubenswrapper[5115]: I0120 09:15:00.438461 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bec78294-76de-4a69-ba13-bf1bc31bd32f-config-volume\") pod \"collect-profiles-29481675-n6rz7\" (UID: \"bec78294-76de-4a69-ba13-bf1bc31bd32f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481675-n6rz7" Jan 20 09:15:00 crc kubenswrapper[5115]: I0120 09:15:00.446973 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bec78294-76de-4a69-ba13-bf1bc31bd32f-secret-volume\") pod \"collect-profiles-29481675-n6rz7\" (UID: \"bec78294-76de-4a69-ba13-bf1bc31bd32f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481675-n6rz7" Jan 20 09:15:00 crc 
kubenswrapper[5115]: I0120 09:15:00.468945 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlz9d\" (UniqueName: \"kubernetes.io/projected/bec78294-76de-4a69-ba13-bf1bc31bd32f-kube-api-access-wlz9d\") pod \"collect-profiles-29481675-n6rz7\" (UID: \"bec78294-76de-4a69-ba13-bf1bc31bd32f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481675-n6rz7" Jan 20 09:15:00 crc kubenswrapper[5115]: I0120 09:15:00.550428 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481675-n6rz7" Jan 20 09:15:00 crc kubenswrapper[5115]: I0120 09:15:00.965628 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29481675-n6rz7"] Jan 20 09:15:00 crc kubenswrapper[5115]: W0120 09:15:00.975910 5115 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbec78294_76de_4a69_ba13_bf1bc31bd32f.slice/crio-f67f0658f034cd9c1511e53723d81502d0b899c392214043c657cf0a16b1f984 WatchSource:0}: Error finding container f67f0658f034cd9c1511e53723d81502d0b899c392214043c657cf0a16b1f984: Status 404 returned error can't find the container with id f67f0658f034cd9c1511e53723d81502d0b899c392214043c657cf0a16b1f984 Jan 20 09:15:01 crc kubenswrapper[5115]: I0120 09:15:01.154353 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29481675-n6rz7" event={"ID":"bec78294-76de-4a69-ba13-bf1bc31bd32f","Type":"ContainerStarted","Data":"2aab00a96e1f4cba3cc540f81abaf3c112e5e9d11f36adba6aa69acd02843a55"} Jan 20 09:15:01 crc kubenswrapper[5115]: I0120 09:15:01.154404 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29481675-n6rz7" 
event={"ID":"bec78294-76de-4a69-ba13-bf1bc31bd32f","Type":"ContainerStarted","Data":"f67f0658f034cd9c1511e53723d81502d0b899c392214043c657cf0a16b1f984"} Jan 20 09:15:01 crc kubenswrapper[5115]: I0120 09:15:01.171637 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29481675-n6rz7" podStartSLOduration=1.171616868 podStartE2EDuration="1.171616868s" podCreationTimestamp="2026-01-20 09:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:15:01.171069224 +0000 UTC m=+411.339847764" watchObservedRunningTime="2026-01-20 09:15:01.171616868 +0000 UTC m=+411.340395398" Jan 20 09:15:01 crc kubenswrapper[5115]: I0120 09:15:01.962496 5115 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-image-registry/image-registry-66587d64c8-b674j" podUID="580c8ecd-e9bb-4c33-aeb2-f304adb8119c" containerName="registry" containerID="cri-o://658aaa1c341101e06f75ed771bab4ffef1039984a8c36f1f22e7f660d9e832ca" gracePeriod=30 Jan 20 09:15:02 crc kubenswrapper[5115]: I0120 09:15:02.161147 5115 generic.go:358] "Generic (PLEG): container finished" podID="580c8ecd-e9bb-4c33-aeb2-f304adb8119c" containerID="658aaa1c341101e06f75ed771bab4ffef1039984a8c36f1f22e7f660d9e832ca" exitCode=0 Jan 20 09:15:02 crc kubenswrapper[5115]: I0120 09:15:02.161266 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-b674j" event={"ID":"580c8ecd-e9bb-4c33-aeb2-f304adb8119c","Type":"ContainerDied","Data":"658aaa1c341101e06f75ed771bab4ffef1039984a8c36f1f22e7f660d9e832ca"} Jan 20 09:15:02 crc kubenswrapper[5115]: I0120 09:15:02.163090 5115 generic.go:358] "Generic (PLEG): container finished" podID="bec78294-76de-4a69-ba13-bf1bc31bd32f" containerID="2aab00a96e1f4cba3cc540f81abaf3c112e5e9d11f36adba6aa69acd02843a55" exitCode=0 Jan 20 09:15:02 crc 
kubenswrapper[5115]: I0120 09:15:02.163289 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29481675-n6rz7" event={"ID":"bec78294-76de-4a69-ba13-bf1bc31bd32f","Type":"ContainerDied","Data":"2aab00a96e1f4cba3cc540f81abaf3c112e5e9d11f36adba6aa69acd02843a55"} Jan 20 09:15:02 crc kubenswrapper[5115]: I0120 09:15:02.379068 5115 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:15:02 crc kubenswrapper[5115]: I0120 09:15:02.563359 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/580c8ecd-e9bb-4c33-aeb2-f304adb8119c-trusted-ca\") pod \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " Jan 20 09:15:02 crc kubenswrapper[5115]: I0120 09:15:02.563455 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/580c8ecd-e9bb-4c33-aeb2-f304adb8119c-registry-certificates\") pod \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " Jan 20 09:15:02 crc kubenswrapper[5115]: I0120 09:15:02.563933 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " Jan 20 09:15:02 crc kubenswrapper[5115]: I0120 09:15:02.563999 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/580c8ecd-e9bb-4c33-aeb2-f304adb8119c-ca-trust-extracted\") pod \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " Jan 20 09:15:02 crc 
kubenswrapper[5115]: I0120 09:15:02.564051 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/580c8ecd-e9bb-4c33-aeb2-f304adb8119c-bound-sa-token\") pod \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " Jan 20 09:15:02 crc kubenswrapper[5115]: I0120 09:15:02.564279 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7mcb\" (UniqueName: \"kubernetes.io/projected/580c8ecd-e9bb-4c33-aeb2-f304adb8119c-kube-api-access-v7mcb\") pod \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " Jan 20 09:15:02 crc kubenswrapper[5115]: I0120 09:15:02.564473 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/580c8ecd-e9bb-4c33-aeb2-f304adb8119c-installation-pull-secrets\") pod \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " Jan 20 09:15:02 crc kubenswrapper[5115]: I0120 09:15:02.564663 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/580c8ecd-e9bb-4c33-aeb2-f304adb8119c-registry-tls\") pod \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " Jan 20 09:15:02 crc kubenswrapper[5115]: I0120 09:15:02.565383 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/580c8ecd-e9bb-4c33-aeb2-f304adb8119c-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "580c8ecd-e9bb-4c33-aeb2-f304adb8119c" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:15:02 crc kubenswrapper[5115]: I0120 09:15:02.567085 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/580c8ecd-e9bb-4c33-aeb2-f304adb8119c-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "580c8ecd-e9bb-4c33-aeb2-f304adb8119c" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:15:02 crc kubenswrapper[5115]: I0120 09:15:02.577184 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/580c8ecd-e9bb-4c33-aeb2-f304adb8119c-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "580c8ecd-e9bb-4c33-aeb2-f304adb8119c" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:15:02 crc kubenswrapper[5115]: I0120 09:15:02.577519 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/580c8ecd-e9bb-4c33-aeb2-f304adb8119c-kube-api-access-v7mcb" (OuterVolumeSpecName: "kube-api-access-v7mcb") pod "580c8ecd-e9bb-4c33-aeb2-f304adb8119c" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c"). InnerVolumeSpecName "kube-api-access-v7mcb". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:15:02 crc kubenswrapper[5115]: I0120 09:15:02.578554 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/580c8ecd-e9bb-4c33-aeb2-f304adb8119c-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "580c8ecd-e9bb-4c33-aeb2-f304adb8119c" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:15:02 crc kubenswrapper[5115]: I0120 09:15:02.579202 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/580c8ecd-e9bb-4c33-aeb2-f304adb8119c-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "580c8ecd-e9bb-4c33-aeb2-f304adb8119c" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:15:02 crc kubenswrapper[5115]: I0120 09:15:02.580507 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "registry-storage") pod "580c8ecd-e9bb-4c33-aeb2-f304adb8119c" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". PluginName "kubernetes.io/csi", VolumeGIDValue "" Jan 20 09:15:02 crc kubenswrapper[5115]: I0120 09:15:02.589612 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/580c8ecd-e9bb-4c33-aeb2-f304adb8119c-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "580c8ecd-e9bb-4c33-aeb2-f304adb8119c" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 20 09:15:02 crc kubenswrapper[5115]: I0120 09:15:02.666768 5115 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/580c8ecd-e9bb-4c33-aeb2-f304adb8119c-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 20 09:15:02 crc kubenswrapper[5115]: I0120 09:15:02.666809 5115 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/580c8ecd-e9bb-4c33-aeb2-f304adb8119c-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 20 09:15:02 crc kubenswrapper[5115]: I0120 09:15:02.666821 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-v7mcb\" (UniqueName: \"kubernetes.io/projected/580c8ecd-e9bb-4c33-aeb2-f304adb8119c-kube-api-access-v7mcb\") on node \"crc\" DevicePath \"\"" Jan 20 09:15:02 crc kubenswrapper[5115]: I0120 09:15:02.666839 5115 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/580c8ecd-e9bb-4c33-aeb2-f304adb8119c-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 20 09:15:02 crc kubenswrapper[5115]: I0120 09:15:02.666851 5115 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/580c8ecd-e9bb-4c33-aeb2-f304adb8119c-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 20 09:15:02 crc kubenswrapper[5115]: I0120 09:15:02.666862 5115 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/580c8ecd-e9bb-4c33-aeb2-f304adb8119c-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 20 09:15:02 crc kubenswrapper[5115]: I0120 09:15:02.666873 5115 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/580c8ecd-e9bb-4c33-aeb2-f304adb8119c-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 20 09:15:03 crc 
kubenswrapper[5115]: I0120 09:15:03.194156 5115 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:15:03 crc kubenswrapper[5115]: I0120 09:15:03.194234 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-b674j" event={"ID":"580c8ecd-e9bb-4c33-aeb2-f304adb8119c","Type":"ContainerDied","Data":"d053f0589af44bf1ec4966f80948e0266381b97821c76787bebafd985060d717"} Jan 20 09:15:03 crc kubenswrapper[5115]: I0120 09:15:03.194598 5115 scope.go:117] "RemoveContainer" containerID="658aaa1c341101e06f75ed771bab4ffef1039984a8c36f1f22e7f660d9e832ca" Jan 20 09:15:03 crc kubenswrapper[5115]: I0120 09:15:03.230993 5115 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-b674j"] Jan 20 09:15:03 crc kubenswrapper[5115]: I0120 09:15:03.231043 5115 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-b674j"] Jan 20 09:15:03 crc kubenswrapper[5115]: I0120 09:15:03.448354 5115 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481675-n6rz7" Jan 20 09:15:03 crc kubenswrapper[5115]: I0120 09:15:03.490308 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wlz9d\" (UniqueName: \"kubernetes.io/projected/bec78294-76de-4a69-ba13-bf1bc31bd32f-kube-api-access-wlz9d\") pod \"bec78294-76de-4a69-ba13-bf1bc31bd32f\" (UID: \"bec78294-76de-4a69-ba13-bf1bc31bd32f\") " Jan 20 09:15:03 crc kubenswrapper[5115]: I0120 09:15:03.490411 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bec78294-76de-4a69-ba13-bf1bc31bd32f-config-volume\") pod \"bec78294-76de-4a69-ba13-bf1bc31bd32f\" (UID: \"bec78294-76de-4a69-ba13-bf1bc31bd32f\") " Jan 20 09:15:03 crc kubenswrapper[5115]: I0120 09:15:03.490477 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bec78294-76de-4a69-ba13-bf1bc31bd32f-secret-volume\") pod \"bec78294-76de-4a69-ba13-bf1bc31bd32f\" (UID: \"bec78294-76de-4a69-ba13-bf1bc31bd32f\") " Jan 20 09:15:03 crc kubenswrapper[5115]: I0120 09:15:03.491296 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bec78294-76de-4a69-ba13-bf1bc31bd32f-config-volume" (OuterVolumeSpecName: "config-volume") pod "bec78294-76de-4a69-ba13-bf1bc31bd32f" (UID: "bec78294-76de-4a69-ba13-bf1bc31bd32f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:15:03 crc kubenswrapper[5115]: I0120 09:15:03.496128 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bec78294-76de-4a69-ba13-bf1bc31bd32f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "bec78294-76de-4a69-ba13-bf1bc31bd32f" (UID: "bec78294-76de-4a69-ba13-bf1bc31bd32f"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 20 09:15:03 crc kubenswrapper[5115]: I0120 09:15:03.496243 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bec78294-76de-4a69-ba13-bf1bc31bd32f-kube-api-access-wlz9d" (OuterVolumeSpecName: "kube-api-access-wlz9d") pod "bec78294-76de-4a69-ba13-bf1bc31bd32f" (UID: "bec78294-76de-4a69-ba13-bf1bc31bd32f"). InnerVolumeSpecName "kube-api-access-wlz9d". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 20 09:15:03 crc kubenswrapper[5115]: I0120 09:15:03.592049 5115 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bec78294-76de-4a69-ba13-bf1bc31bd32f-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 20 09:15:03 crc kubenswrapper[5115]: I0120 09:15:03.592081 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wlz9d\" (UniqueName: \"kubernetes.io/projected/bec78294-76de-4a69-ba13-bf1bc31bd32f-kube-api-access-wlz9d\") on node \"crc\" DevicePath \"\""
Jan 20 09:15:03 crc kubenswrapper[5115]: I0120 09:15:03.592090 5115 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bec78294-76de-4a69-ba13-bf1bc31bd32f-config-volume\") on node \"crc\" DevicePath \"\""
Jan 20 09:15:04 crc kubenswrapper[5115]: I0120 09:15:04.204088 5115 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481675-n6rz7"
Jan 20 09:15:04 crc kubenswrapper[5115]: I0120 09:15:04.204111 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29481675-n6rz7" event={"ID":"bec78294-76de-4a69-ba13-bf1bc31bd32f","Type":"ContainerDied","Data":"f67f0658f034cd9c1511e53723d81502d0b899c392214043c657cf0a16b1f984"}
Jan 20 09:15:04 crc kubenswrapper[5115]: I0120 09:15:04.204171 5115 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f67f0658f034cd9c1511e53723d81502d0b899c392214043c657cf0a16b1f984"
Jan 20 09:15:04 crc kubenswrapper[5115]: I0120 09:15:04.230810 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="580c8ecd-e9bb-4c33-aeb2-f304adb8119c" path="/var/lib/kubelet/pods/580c8ecd-e9bb-4c33-aeb2-f304adb8119c/volumes"
Jan 20 09:15:08 crc kubenswrapper[5115]: I0120 09:15:08.483001 5115 patch_prober.go:28] interesting pod/machine-config-daemon-zvfcd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 20 09:15:08 crc kubenswrapper[5115]: I0120 09:15:08.483921 5115 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zvfcd" podUID="dc89765b-3b00-4f86-ae67-a5088c182918" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 20 09:15:10 crc kubenswrapper[5115]: I0120 09:15:10.554696 5115 scope.go:117] "RemoveContainer" containerID="6a65133584c92a02557ec7a68bc231cbf328c72b94121d393761fae9e77a43df"
Jan 20 09:15:10 crc kubenswrapper[5115]: I0120 09:15:10.582368 5115 scope.go:117] "RemoveContainer" containerID="732f833d741db4f25185d597b6c55514eac6e2fefadb22332239b99e78faa12c"
Jan 20 09:15:10 crc kubenswrapper[5115]: I0120 09:15:10.611606 5115 scope.go:117] "RemoveContainer" containerID="4459efcaad2c1e7ab6acad4f70731a19325a72c01d38b2f5c5ebb0e654c3e652"
Jan 20 09:15:10 crc kubenswrapper[5115]: I0120 09:15:10.637065 5115 scope.go:117] "RemoveContainer" containerID="7bc7ce39ff7ab01bae0a1441c0086dd0bb588059f1c38dcf038a03d08f73e0f5"
Jan 20 09:15:10 crc kubenswrapper[5115]: I0120 09:15:10.659041 5115 scope.go:117] "RemoveContainer" containerID="f042b661a3072f2466176ad58de653a1ac5fd34d0d1c9b846b833b88bded9006"
Jan 20 09:15:38 crc kubenswrapper[5115]: I0120 09:15:38.483361 5115 patch_prober.go:28] interesting pod/machine-config-daemon-zvfcd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 20 09:15:38 crc kubenswrapper[5115]: I0120 09:15:38.484145 5115 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zvfcd" podUID="dc89765b-3b00-4f86-ae67-a5088c182918" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 20 09:16:00 crc kubenswrapper[5115]: I0120 09:16:00.139430 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29481676-t6krr"]
Jan 20 09:16:00 crc kubenswrapper[5115]: I0120 09:16:00.140552 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="580c8ecd-e9bb-4c33-aeb2-f304adb8119c" containerName="registry"
Jan 20 09:16:00 crc kubenswrapper[5115]: I0120 09:16:00.140566 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="580c8ecd-e9bb-4c33-aeb2-f304adb8119c" containerName="registry"
Jan 20 09:16:00 crc kubenswrapper[5115]: I0120 09:16:00.140577 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="bec78294-76de-4a69-ba13-bf1bc31bd32f" containerName="collect-profiles"
Jan 20 09:16:00 crc kubenswrapper[5115]: I0120 09:16:00.140582 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="bec78294-76de-4a69-ba13-bf1bc31bd32f" containerName="collect-profiles"
Jan 20 09:16:00 crc kubenswrapper[5115]: I0120 09:16:00.140681 5115 memory_manager.go:356] "RemoveStaleState removing state" podUID="580c8ecd-e9bb-4c33-aeb2-f304adb8119c" containerName="registry"
Jan 20 09:16:00 crc kubenswrapper[5115]: I0120 09:16:00.140699 5115 memory_manager.go:356] "RemoveStaleState removing state" podUID="bec78294-76de-4a69-ba13-bf1bc31bd32f" containerName="collect-profiles"
Jan 20 09:16:00 crc kubenswrapper[5115]: I0120 09:16:00.162218 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29481676-t6krr"
Jan 20 09:16:00 crc kubenswrapper[5115]: I0120 09:16:00.164994 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29481676-t6krr"]
Jan 20 09:16:00 crc kubenswrapper[5115]: I0120 09:16:00.167732 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-7txkl\""
Jan 20 09:16:00 crc kubenswrapper[5115]: I0120 09:16:00.168071 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\""
Jan 20 09:16:00 crc kubenswrapper[5115]: I0120 09:16:00.168473 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\""
Jan 20 09:16:00 crc kubenswrapper[5115]: I0120 09:16:00.311323 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfqsk\" (UniqueName: \"kubernetes.io/projected/c6a29366-7d58-427e-a357-043043b83881-kube-api-access-vfqsk\") pod \"auto-csr-approver-29481676-t6krr\" (UID: \"c6a29366-7d58-427e-a357-043043b83881\") " pod="openshift-infra/auto-csr-approver-29481676-t6krr"
Jan 20 09:16:00 crc kubenswrapper[5115]: I0120 09:16:00.413389 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vfqsk\" (UniqueName: \"kubernetes.io/projected/c6a29366-7d58-427e-a357-043043b83881-kube-api-access-vfqsk\") pod \"auto-csr-approver-29481676-t6krr\" (UID: \"c6a29366-7d58-427e-a357-043043b83881\") " pod="openshift-infra/auto-csr-approver-29481676-t6krr"
Jan 20 09:16:00 crc kubenswrapper[5115]: I0120 09:16:00.447008 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfqsk\" (UniqueName: \"kubernetes.io/projected/c6a29366-7d58-427e-a357-043043b83881-kube-api-access-vfqsk\") pod \"auto-csr-approver-29481676-t6krr\" (UID: \"c6a29366-7d58-427e-a357-043043b83881\") " pod="openshift-infra/auto-csr-approver-29481676-t6krr"
Jan 20 09:16:00 crc kubenswrapper[5115]: I0120 09:16:00.500000 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29481676-t6krr"
Jan 20 09:16:00 crc kubenswrapper[5115]: I0120 09:16:00.986677 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29481676-t6krr"]
Jan 20 09:16:01 crc kubenswrapper[5115]: I0120 09:16:01.665028 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29481676-t6krr" event={"ID":"c6a29366-7d58-427e-a357-043043b83881","Type":"ContainerStarted","Data":"b9b2bc15b0761e31fb15f9e9d3ee8d3c4b0d8b925fa461a7081a9831a8a2dd97"}
Jan 20 09:16:05 crc kubenswrapper[5115]: I0120 09:16:05.692797 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29481676-t6krr" event={"ID":"c6a29366-7d58-427e-a357-043043b83881","Type":"ContainerStarted","Data":"c04240c9c88a0e670c1ddaaec72be9e2f9060795f59d8b40a12f489449b36d51"}
Jan 20 09:16:05 crc kubenswrapper[5115]: I0120 09:16:05.717243 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29481676-t6krr" podStartSLOduration=1.6373720189999998 podStartE2EDuration="5.717212012s" podCreationTimestamp="2026-01-20 09:16:00 +0000 UTC" firstStartedPulling="2026-01-20 09:16:00.998208174 +0000 UTC m=+471.166986714" lastFinishedPulling="2026-01-20 09:16:05.078048177 +0000 UTC m=+475.246826707" observedRunningTime="2026-01-20 09:16:05.71005731 +0000 UTC m=+475.878835890" watchObservedRunningTime="2026-01-20 09:16:05.717212012 +0000 UTC m=+475.885990572"
Jan 20 09:16:05 crc kubenswrapper[5115]: I0120 09:16:05.755626 5115 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kubelet-serving" csr="csr-d8t2v"
Jan 20 09:16:05 crc kubenswrapper[5115]: I0120 09:16:05.793141 5115 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kubelet-serving" csr="csr-d8t2v"
Jan 20 09:16:06 crc kubenswrapper[5115]: I0120 09:16:06.703142 5115 generic.go:358] "Generic (PLEG): container finished" podID="c6a29366-7d58-427e-a357-043043b83881" containerID="c04240c9c88a0e670c1ddaaec72be9e2f9060795f59d8b40a12f489449b36d51" exitCode=0
Jan 20 09:16:06 crc kubenswrapper[5115]: I0120 09:16:06.703338 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29481676-t6krr" event={"ID":"c6a29366-7d58-427e-a357-043043b83881","Type":"ContainerDied","Data":"c04240c9c88a0e670c1ddaaec72be9e2f9060795f59d8b40a12f489449b36d51"}
Jan 20 09:16:06 crc kubenswrapper[5115]: I0120 09:16:06.794470 5115 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2026-02-19 09:11:05 +0000 UTC" deadline="2026-02-14 00:31:48.277195595 +0000 UTC"
Jan 20 09:16:06 crc kubenswrapper[5115]: I0120 09:16:06.794520 5115 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="591h15m41.482680488s"
Jan 20 09:16:07 crc kubenswrapper[5115]: I0120 09:16:07.795208 5115 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2026-02-19 09:11:05 +0000 UTC" deadline="2026-02-10 11:40:13.537305609 +0000 UTC"
Jan 20 09:16:07 crc kubenswrapper[5115]: I0120 09:16:07.795706 5115 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="506h24m5.741608834s"
Jan 20 09:16:08 crc kubenswrapper[5115]: I0120 09:16:08.002163 5115 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29481676-t6krr"
Jan 20 09:16:08 crc kubenswrapper[5115]: I0120 09:16:08.039852 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vfqsk\" (UniqueName: \"kubernetes.io/projected/c6a29366-7d58-427e-a357-043043b83881-kube-api-access-vfqsk\") pod \"c6a29366-7d58-427e-a357-043043b83881\" (UID: \"c6a29366-7d58-427e-a357-043043b83881\") "
Jan 20 09:16:08 crc kubenswrapper[5115]: I0120 09:16:08.046398 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6a29366-7d58-427e-a357-043043b83881-kube-api-access-vfqsk" (OuterVolumeSpecName: "kube-api-access-vfqsk") pod "c6a29366-7d58-427e-a357-043043b83881" (UID: "c6a29366-7d58-427e-a357-043043b83881"). InnerVolumeSpecName "kube-api-access-vfqsk". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 20 09:16:08 crc kubenswrapper[5115]: I0120 09:16:08.140792 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vfqsk\" (UniqueName: \"kubernetes.io/projected/c6a29366-7d58-427e-a357-043043b83881-kube-api-access-vfqsk\") on node \"crc\" DevicePath \"\""
Jan 20 09:16:08 crc kubenswrapper[5115]: I0120 09:16:08.482663 5115 patch_prober.go:28] interesting pod/machine-config-daemon-zvfcd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 20 09:16:08 crc kubenswrapper[5115]: I0120 09:16:08.482715 5115 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zvfcd" podUID="dc89765b-3b00-4f86-ae67-a5088c182918" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 20 09:16:08 crc kubenswrapper[5115]: I0120 09:16:08.482753 5115 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zvfcd"
Jan 20 09:16:08 crc kubenswrapper[5115]: I0120 09:16:08.483215 5115 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"91dc8479398c4ca8a212adb6ee5aaefb3869b82e5fade77dc4b295c2c867eb29"} pod="openshift-machine-config-operator/machine-config-daemon-zvfcd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 20 09:16:08 crc kubenswrapper[5115]: I0120 09:16:08.483262 5115 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zvfcd" podUID="dc89765b-3b00-4f86-ae67-a5088c182918" containerName="machine-config-daemon" containerID="cri-o://91dc8479398c4ca8a212adb6ee5aaefb3869b82e5fade77dc4b295c2c867eb29" gracePeriod=600
Jan 20 09:16:08 crc kubenswrapper[5115]: I0120 09:16:08.716024 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29481676-t6krr" event={"ID":"c6a29366-7d58-427e-a357-043043b83881","Type":"ContainerDied","Data":"b9b2bc15b0761e31fb15f9e9d3ee8d3c4b0d8b925fa461a7081a9831a8a2dd97"}
Jan 20 09:16:08 crc kubenswrapper[5115]: I0120 09:16:08.716382 5115 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9b2bc15b0761e31fb15f9e9d3ee8d3c4b0d8b925fa461a7081a9831a8a2dd97"
Jan 20 09:16:08 crc kubenswrapper[5115]: I0120 09:16:08.716480 5115 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29481676-t6krr"
Jan 20 09:16:08 crc kubenswrapper[5115]: I0120 09:16:08.718828 5115 generic.go:358] "Generic (PLEG): container finished" podID="dc89765b-3b00-4f86-ae67-a5088c182918" containerID="91dc8479398c4ca8a212adb6ee5aaefb3869b82e5fade77dc4b295c2c867eb29" exitCode=0
Jan 20 09:16:08 crc kubenswrapper[5115]: I0120 09:16:08.718876 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zvfcd" event={"ID":"dc89765b-3b00-4f86-ae67-a5088c182918","Type":"ContainerDied","Data":"91dc8479398c4ca8a212adb6ee5aaefb3869b82e5fade77dc4b295c2c867eb29"}
Jan 20 09:16:08 crc kubenswrapper[5115]: I0120 09:16:08.718996 5115 scope.go:117] "RemoveContainer" containerID="95c07e0438f206b88563e2b39a6250eb2706530b4f1d2646ed4348287befe586"
Jan 20 09:16:09 crc kubenswrapper[5115]: I0120 09:16:09.727741 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zvfcd" event={"ID":"dc89765b-3b00-4f86-ae67-a5088c182918","Type":"ContainerStarted","Data":"318a92888a4aefb646ef70769ebe07ba2549fcaa74b80c7afec657d563a87cf0"}
Jan 20 09:16:10 crc kubenswrapper[5115]: I0120 09:16:10.818530 5115 scope.go:117] "RemoveContainer" containerID="cd35bfe818999fb69f754d3ef537d63114d8766c9a55fd8c1f055b4598993e53"
Jan 20 09:18:00 crc kubenswrapper[5115]: I0120 09:18:00.156635 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29481678-rk846"]
Jan 20 09:18:00 crc kubenswrapper[5115]: I0120 09:18:00.158346 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c6a29366-7d58-427e-a357-043043b83881" containerName="oc"
Jan 20 09:18:00 crc kubenswrapper[5115]: I0120 09:18:00.158376 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6a29366-7d58-427e-a357-043043b83881" containerName="oc"
Jan 20 09:18:00 crc kubenswrapper[5115]: I0120 09:18:00.158529 5115 memory_manager.go:356] "RemoveStaleState removing state" podUID="c6a29366-7d58-427e-a357-043043b83881" containerName="oc"
Jan 20 09:18:00 crc kubenswrapper[5115]: I0120 09:18:00.181391 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29481678-rk846"]
Jan 20 09:18:00 crc kubenswrapper[5115]: I0120 09:18:00.181544 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29481678-rk846"
Jan 20 09:18:00 crc kubenswrapper[5115]: I0120 09:18:00.184512 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\""
Jan 20 09:18:00 crc kubenswrapper[5115]: I0120 09:18:00.184618 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-7txkl\""
Jan 20 09:18:00 crc kubenswrapper[5115]: I0120 09:18:00.185425 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\""
Jan 20 09:18:00 crc kubenswrapper[5115]: I0120 09:18:00.276560 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvjnj\" (UniqueName: \"kubernetes.io/projected/3bd4b257-185a-4876-9eeb-4d69084bad68-kube-api-access-wvjnj\") pod \"auto-csr-approver-29481678-rk846\" (UID: \"3bd4b257-185a-4876-9eeb-4d69084bad68\") " pod="openshift-infra/auto-csr-approver-29481678-rk846"
Jan 20 09:18:00 crc kubenswrapper[5115]: I0120 09:18:00.378691 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wvjnj\" (UniqueName: \"kubernetes.io/projected/3bd4b257-185a-4876-9eeb-4d69084bad68-kube-api-access-wvjnj\") pod \"auto-csr-approver-29481678-rk846\" (UID: \"3bd4b257-185a-4876-9eeb-4d69084bad68\") " pod="openshift-infra/auto-csr-approver-29481678-rk846"
Jan 20 09:18:00 crc kubenswrapper[5115]: I0120 09:18:00.414034 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvjnj\" (UniqueName: \"kubernetes.io/projected/3bd4b257-185a-4876-9eeb-4d69084bad68-kube-api-access-wvjnj\") pod \"auto-csr-approver-29481678-rk846\" (UID: \"3bd4b257-185a-4876-9eeb-4d69084bad68\") " pod="openshift-infra/auto-csr-approver-29481678-rk846"
Jan 20 09:18:00 crc kubenswrapper[5115]: I0120 09:18:00.513120 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29481678-rk846"
Jan 20 09:18:00 crc kubenswrapper[5115]: I0120 09:18:00.808075 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29481678-rk846"]
Jan 20 09:18:00 crc kubenswrapper[5115]: W0120 09:18:00.813605 5115 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3bd4b257_185a_4876_9eeb_4d69084bad68.slice/crio-2fbd04cabed3e38333366ca27582242011f1e6a4fdc3114f0e91d6c35b249bbe WatchSource:0}: Error finding container 2fbd04cabed3e38333366ca27582242011f1e6a4fdc3114f0e91d6c35b249bbe: Status 404 returned error can't find the container with id 2fbd04cabed3e38333366ca27582242011f1e6a4fdc3114f0e91d6c35b249bbe
Jan 20 09:18:01 crc kubenswrapper[5115]: I0120 09:18:01.525286 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29481678-rk846" event={"ID":"3bd4b257-185a-4876-9eeb-4d69084bad68","Type":"ContainerStarted","Data":"2fbd04cabed3e38333366ca27582242011f1e6a4fdc3114f0e91d6c35b249bbe"}
Jan 20 09:18:03 crc kubenswrapper[5115]: I0120 09:18:03.541732 5115 generic.go:358] "Generic (PLEG): container finished" podID="3bd4b257-185a-4876-9eeb-4d69084bad68" containerID="aaee7dc4a03126bb7351fc6e6855c258363d9583e2c9910d8ea9adb20ddc6909" exitCode=0
Jan 20 09:18:03 crc kubenswrapper[5115]: I0120 09:18:03.541808 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29481678-rk846" event={"ID":"3bd4b257-185a-4876-9eeb-4d69084bad68","Type":"ContainerDied","Data":"aaee7dc4a03126bb7351fc6e6855c258363d9583e2c9910d8ea9adb20ddc6909"}
Jan 20 09:18:04 crc kubenswrapper[5115]: I0120 09:18:04.806613 5115 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29481678-rk846"
Jan 20 09:18:04 crc kubenswrapper[5115]: I0120 09:18:04.957218 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wvjnj\" (UniqueName: \"kubernetes.io/projected/3bd4b257-185a-4876-9eeb-4d69084bad68-kube-api-access-wvjnj\") pod \"3bd4b257-185a-4876-9eeb-4d69084bad68\" (UID: \"3bd4b257-185a-4876-9eeb-4d69084bad68\") "
Jan 20 09:18:04 crc kubenswrapper[5115]: I0120 09:18:04.964018 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bd4b257-185a-4876-9eeb-4d69084bad68-kube-api-access-wvjnj" (OuterVolumeSpecName: "kube-api-access-wvjnj") pod "3bd4b257-185a-4876-9eeb-4d69084bad68" (UID: "3bd4b257-185a-4876-9eeb-4d69084bad68"). InnerVolumeSpecName "kube-api-access-wvjnj". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 20 09:18:05 crc kubenswrapper[5115]: I0120 09:18:05.059051 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wvjnj\" (UniqueName: \"kubernetes.io/projected/3bd4b257-185a-4876-9eeb-4d69084bad68-kube-api-access-wvjnj\") on node \"crc\" DevicePath \"\""
Jan 20 09:18:05 crc kubenswrapper[5115]: I0120 09:18:05.554703 5115 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29481678-rk846"
Jan 20 09:18:05 crc kubenswrapper[5115]: I0120 09:18:05.554761 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29481678-rk846" event={"ID":"3bd4b257-185a-4876-9eeb-4d69084bad68","Type":"ContainerDied","Data":"2fbd04cabed3e38333366ca27582242011f1e6a4fdc3114f0e91d6c35b249bbe"}
Jan 20 09:18:05 crc kubenswrapper[5115]: I0120 09:18:05.554791 5115 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2fbd04cabed3e38333366ca27582242011f1e6a4fdc3114f0e91d6c35b249bbe"
Jan 20 09:18:08 crc kubenswrapper[5115]: I0120 09:18:08.483267 5115 patch_prober.go:28] interesting pod/machine-config-daemon-zvfcd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 20 09:18:08 crc kubenswrapper[5115]: I0120 09:18:08.483741 5115 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zvfcd" podUID="dc89765b-3b00-4f86-ae67-a5088c182918" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 20 09:18:10 crc kubenswrapper[5115]: I0120 09:18:10.460083 5115 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Jan 20 09:18:10 crc kubenswrapper[5115]: I0120 09:18:10.461802 5115 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Jan 20 09:18:38 crc kubenswrapper[5115]: I0120 09:18:38.483137 5115 patch_prober.go:28] interesting pod/machine-config-daemon-zvfcd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 20 09:18:38 crc kubenswrapper[5115]: I0120 09:18:38.483818 5115 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zvfcd" podUID="dc89765b-3b00-4f86-ae67-a5088c182918" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 20 09:19:08 crc kubenswrapper[5115]: I0120 09:19:08.483656 5115 patch_prober.go:28] interesting pod/machine-config-daemon-zvfcd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 20 09:19:08 crc kubenswrapper[5115]: I0120 09:19:08.484746 5115 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zvfcd" podUID="dc89765b-3b00-4f86-ae67-a5088c182918" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 20 09:19:08 crc kubenswrapper[5115]: I0120 09:19:08.484844 5115 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zvfcd"
Jan 20 09:19:08 crc kubenswrapper[5115]: I0120 09:19:08.486074 5115 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"318a92888a4aefb646ef70769ebe07ba2549fcaa74b80c7afec657d563a87cf0"} pod="openshift-machine-config-operator/machine-config-daemon-zvfcd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 20 09:19:08 crc kubenswrapper[5115]: I0120 09:19:08.486187 5115 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zvfcd" podUID="dc89765b-3b00-4f86-ae67-a5088c182918" containerName="machine-config-daemon" containerID="cri-o://318a92888a4aefb646ef70769ebe07ba2549fcaa74b80c7afec657d563a87cf0" gracePeriod=600
Jan 20 09:19:08 crc kubenswrapper[5115]: I0120 09:19:08.631923 5115 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 20 09:19:08 crc kubenswrapper[5115]: I0120 09:19:08.984888 5115 generic.go:358] "Generic (PLEG): container finished" podID="dc89765b-3b00-4f86-ae67-a5088c182918" containerID="318a92888a4aefb646ef70769ebe07ba2549fcaa74b80c7afec657d563a87cf0" exitCode=0
Jan 20 09:19:08 crc kubenswrapper[5115]: I0120 09:19:08.984948 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zvfcd" event={"ID":"dc89765b-3b00-4f86-ae67-a5088c182918","Type":"ContainerDied","Data":"318a92888a4aefb646ef70769ebe07ba2549fcaa74b80c7afec657d563a87cf0"}
Jan 20 09:19:08 crc kubenswrapper[5115]: I0120 09:19:08.985036 5115 scope.go:117] "RemoveContainer" containerID="91dc8479398c4ca8a212adb6ee5aaefb3869b82e5fade77dc4b295c2c867eb29"
Jan 20 09:19:09 crc kubenswrapper[5115]: I0120 09:19:09.995519 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zvfcd" event={"ID":"dc89765b-3b00-4f86-ae67-a5088c182918","Type":"ContainerStarted","Data":"3c8d58d8b9258defba8eb8fcd56ea4a754ea8ca5ded8c883cc93464635be9331"}