[Binary tar archive — contents not representable as text.]

Archive members (from ustar headers):
- var/home/core/zuul-output/
- var/home/core/zuul-output/logs/
- var/home/core/zuul-output/logs/kubelet.log.gz

The kubelet.log.gz member is gzip-compressed log data; the remainder of the file is its compressed binary payload and cannot be recovered as readable text here. To inspect the log, extract and decompress the archive, e.g. `tar -xf <archive> && gunzip var/home/core/zuul-output/logs/kubelet.log.gz`.
8[i)?^_80\0 za5 :L/ BiRZBN{ DuYWd  eUL1)p iCoep)si 4 ]Qi +fV0,f*qs3+I{K# JclZfYvaDT"z+U!vZK"CX:iJn 8q^I#ځc"rf%U !\.[l scpϬ)Ud$͒c: >*`;q&m2Q_hM n7Rީr:}8B^\hG)"ZM S]vۗx)fQ9N'3U6.$ уJ2YIfF+!"mH> Qg\F@j |}eL W)rg.h~S5)eBh7GeO[޾F bvɨ$,j5iåEr2%ibJX ^-E4tsIvNemTg]Yj{x Fҥkݙ/nt5$~z,p`p+}2/hCDPk]r)PpͤջG~M)YL-j&W6qo^F"Fn1HmUT$~& Du-VWH4EXmD^mhǚ.Yqh 9̞&w͡:[2-'xwNh~an'waW|$B0hJ9G#&1MRDe i~)j͍}*> ]_p{' ; фt2ьQP4B:c6!v_nkRwڍ Bi |ښZ,;E\!²SHiecٹBey9fݧiH?&QWp؀/l$yd%us@R*!S4氍>VIw e2x4C“Y,pf-Q9Nw}huq.]]2 S|p=fn'n|СsCwÇt)EOՓntK꨾_[rS<&GQ#)df\I;OnUEQw}4.%k$no6ZA}P-P/V4rRNQ3̙:BR5mpSoSvcbPc$#N_R>@<# rh\E\5 J҃\EJނ\2 ,j%Y8Fx+FA2bL Q%ZvvҮC:wgo#tǹ[rb֨pOooaw:NjC=DC8&>ocq_|EnDקhvW彎r}1{$ꏓ.gaف97ǜIizܫ3xb0-yux{<LcqXEfmbfxw/{~8a7śN]vX!jދxCQ-p:tqRn ;O|-7jP ޵xq+EurhK :4~)G+iakPH~aYBoH{jI3HGL`OHI2ݱ۶u^~;eg h7G"3ma[h-mqu<]E2 >C/t׶Lu c)S|*O,MѬ7uG!̀RU?Lu&wkT evҴC^4߽nڳ%䫒PxMBd:ܰ׶71ǸYr?u4 lmV׎TX9 y  ؞n>Yv a;Ny5= Mqx "QDwf9ˀ68EUKX"! ͏xQM{Zޟ5~z̓]wzƢj 쒧w?Pc5(s5tg^gniЊts=DmPF(BF24 7Z ܈:\AKMFP]䃞OfF,/8:)Mgw,.9s*ʃ&s_8hƄ^ fV Frw\+^\vʨ&ﮊ݌2Y~y7%-*O=ǥrEWpXbX@h ta9i;Eu rag["P|A@"2TJL0еZNXϙ R=dQCGX=[0ݾG>ݕcT$Btnuv:6{ukSX !pCᄶn#B Xaz&75 ~b ߄ {OVt"!s̈`n40-[:g +1-U~ic8v(sGoBg2j5A i}I.x '8&d2,DYƠİ2yLz nMTtIE)" `UTN32; "Ovh\G~Ff.SldF .pBz"Yl="P ʊ%t%f'rg-9=YpI)P/gqI:Tr@(O2FYhfx黁/U_Z:LA zfӫ 瘌MTvrA( ՛-b=+6F)!ηs5Nv|zhkCڎa$8ɀmٺ>&ǟYL|V5j<ӞYdi"I% "I`<9rfj7ZvrVj3\$9nr`ŎwLަâ4"˸jR  =8Q ;g0_NZ?}ul9m㤵%P+N2_ЄT*oJ̆}]Of3t(+=9!#9F(ᄗ@8'呧cX p~Li.ЀkU؝j{iUC}eՖoQc-5!c\5>kOjp^}.hBu9ns90Cᴖxw!b,-"b_+EJv3m<ʢhd&nB̜׳oCi-ߏ'+˰ӱD‰hEezڛkh.]zu'@˲nA?&}:mu !=F5vzM]K`a q#0y֓rf~>%<z6f]sR*{D[> BbAq͛fO (OOS}=09Ws #upV( Pp mJٺ3W-kj|$QmӴLjZ\5nX^3>9.ĉT ]/]"nLd}g!ՁCxj/V8>4o|KhZ^Sʵba^1a?jegA [d9=orĆ~},J:nvh@>hԯfr?h/~ɒ7'{ho_oBkI 6@b? T>` .!nV`eЎ;JK6vK;2%cF(8B+yn'λ}HAt% :Ѭ+rELl Pھ+6}_0̌P#W6qD˚dB_1",!co9(*^JHg"xz*q)6v`Nb֐q^vI8Ҭx9 L7^_}sK1o~L~uttNt/?֥nM.Npe+UGq~/KS[W8'9[PYÃFѦhKisl)9H,Nyl@9Gg6<j0ɪ0:.Lڀĝ8쒵H!ʆh,;|)6}kv.^A7!` +P~"A;4@Zw;N(gZu#sdr{Yç49 o,u`GA2N@YEcR&'|'</ezM@bI˿&磣LX2m`WFD-@3;iH[MKoxs[?p4.ytE>3uweDBɭ

@D~󀠡'rLY%!=%{-ԉ3IjղszSu:L&4 l ?hα v򼿱dsw}{i&]\{Yߠq42.C6uvZ%.@we3b`_W.1"?BYֈ&Ct j̕h"@,FvT/6UDDTR3^&30b>"j|@*ku~98ԲҴ¢.g)eր %3 D%,ܑv q!T֐HN5X<-g7(|SQ*N{a[UȬ՚ 1w%hZ5'eWܮV9(d tOb83vHi<~GR84;ŧ1#8%U8ԚSv0 DdJ.VvxPԁBw/TjrK=%ޕ*u F~eBE UUU-b!rw.@@AVr(HccL3zYALLW)K)#We3D&Bd |I]!Q&AwCK~˪Rڳ`*v@ΝJk]*%hܮlQJyBnp(3TX6̥MΎ{1ߐZl++~Jqyv83;LS7u=pG;L~;$p 14pք=!ǽJ?ɟk5uذtA[(Y IR>,`ǻ拏qH" t@lo.ZgCyb /A"Ȗ:`- kcuE֗XZ8䑻j[XL]YO՝Wţd1nSߐ"`دSI5Ɖ*Q9hN㤬-MT\E94MXO-b:5wul$ŕMdE;Eͯ 9&o;&a Ic-䙴3<7oHs-}rtNW:P,yB7*uD֍_dw̤NxQm5$kwjy+S+S1̋wdHyՠĪ%a߰Y82N^ ҒAK(3h~+AvÑUg8MS72d4! gL /S ೸|A:JiŹ?mQ(&N]D+PA_vHKzކo\SGiy/]?^&'7S?9733'ʞq\MiҔf)mޖ *Axz'C! {Xk7vdM.I2' 82/ ^"G^Pkuoi$|Sv^KU цˠm% r-8x41 rَfa-ogSZp#:e&5t6;ceXM롻vc{^b0?1ӯ/؁9gʈ}&/種8)] Bdy~Cq0C uש>Mp] s6+/K@}%nigbn. @SYr%Iތ?D2Ir#\`Y(LS mR`R_, KiE>U""QĖbpqB&B"%wcA0\؝җH`VP7r젚Υ Ktf8XE)%;7 3XL-aJSc*bQLYMu _=2V.4ӷ?*:G>?xm8Hƹ'+VܬQ}Rdq/~|fSs"(TZz <ќB0c$'KR G)RŕD5DiMV] Rс&mRhe\Tytη^GKQo]+w/açy&DUy9v>ɺ2%hPCdՂ~_lg ^@6Ck#PzhGasM2. guaɁK|Yr`|޿ ~ݽxokcG ʝAA= fCV\ئ`! T<>aݱ }g!5|սk=ȳ3}$7g] \T2o>f2Rf }Pྫ\vdnpTEsXk.}gx 4xEs) F%,(kqѯ'uׄV=vYcxuٯZAmY3Y5ITb'()^GN>.wr3?ǪHY* xP0a|Sh3w ]ݫ6;?@ /Z}x%;AU]b~hDz auhS^*?@G_<; =u vlY~)av7Yw=]|yjċ fa mڜMp|BaYw3_7wް:LccNNy2͆b0@&ӮNg>V!Zykv0n[W&`*PDs~z 7uO*ͽs7m= 0l5Ҝs |t-6nS܎dю̡o4|0o&Xu]+]u#ԋgƼmXmteşۤEh!#x]t{CP8)Sss*bAsybM2Zܢ\\wP1\"W5kxj=T2ߏL>ߞyYv{>ipxn]UA+{seG/kqgHV.\OE=a$rwab2 V9S,$U؇op\նw[ûὯ) 6&iV(S0#i&KPP YEԐ!IΙh:\I+W~=~wy;5@3!^ 9ܼ(ͿfԱր]\yVA'mx B!߾>8>8KKwmQ)'-@]:+>YwirYL阑jlԍA;={ YG=էe*I_NZg3%k3s ]Ox.I˳iY߻pq\4h7 cKmx_B]hZŴQO̓pAY {\E͡9rIY%r%*Ze,"qMVNLH XI +ҕXX4̵RĭR1Lss,k0EZq80oj iVT7ݗ/YQ-wop(zy2ho]BHso9`Q `~G=og5O+RGdTPOdOSȜq4 i7$\^'S|{Na>+q (aB*q(ֈb[%wYϣ6?&? 
W3$xR:Βa} C;6}BO>9cw|˛vKtfpAϲ k^EQ <6p*ToN'F_vA$%,td"X?uq̜_V<+|[{lZ QS`AߩT`}2 H,B1EINck1RV i #YRgqb*o{Y:ϷlT\ LI4%]|P .ݘ|àd.δx, *fI'c8z)^lхG.j ϻa}/=fϿ<8L@+s;+)rݍ󉁅Hae{aU11n\n˞7F̮g(SJ-9 =4ݳ8ӆ4yp\O1Gftsvj]P$XKoO71'wq\612VQJ,f0pSUJ@+A0q$&"%ۘ3{=}GI'DXd6.{ zÝٔm?iMְb,E$qRD)c l&B2!jÂ҄56Ś6d폲8lkIKst$e7Ӌp6 #v>ouXbDilJ;Bq iR+&H0we`^2%X6tCU$aiD"sC!R$gH%1ط$%WjkVim*iJې!me$cф$RZQ5,Zj#N8,e:1K4% T"ڐA˭m%m+i{-a+kz_tUm"O:*+/\J|&|mr[d6^]Kn]7uoXy⪅|(N5 #pBtv7% vyYU)mowRcʅ"S*ȄK&6$HľZĔJnڰ2o0Q+ˁ-l1rï7.gz,ܻ?{mD”cek1ΰ1ah k.:];ufcq0ߚ?ЎA ߏt'+tM+?ѥ)\zddӌHo^hF-_1zތJF^ȫ9v8)2)h]_헻1rJ7o1gΓܺ ޮ nـjRv9n*&߼5mD_2&lyRUpo$S._?ktuy=@w"\\zL7__ekՁ[LTUPxv`t֠js?v="DZwp<ެmK+#+th$7PkBA9ۢPtؔqQ\a9sl6$"&Q!$Q%[Z U6k54uZRnƀBшZa,A$B0Q'N%F`b]`9K&lJK8hne"JDvyԋg*fiה|r;@AflWEun%fF؈X'"@ #kED ))q$"Va☸4)]lb߅0qҔ8u@2c0KMӪe9/~Ƅe]&ڪ?4&*1HcM5A 55rjp KLĉR,QQm#&JlU5P5+Ӓ<[wr*g5*@%+!K)W1̀p4bKT #+cyHm]V\7g uCik_j`3pۍ "J atjeJ媅 x -HHMB(tԂRZlަlqIR5#n49e$5-XK~Vorҳ`KAdh$c!RAiDdbaJPe[[g2qVrer8OLɴ~_X P/vnQ,6~ _;vrq5ؼͰSɉv4 5mmaL9p\AFHj:禫y7oyy:3/GZ˴iݘ>#mY5}pٻFn,Wuv!a_Lz]J)\LRI*LDJ(J$xCJ*|*i!H4}4B`^*ދ1lP+G'ꀱpN'Q[xS\Ւ ZZEށZÝj!g=g ZP]#[`";; :{ˏUyFv}EPbvxf 5mUne(YY<9oaa&d\ /8g_.r.ͯj4#GIZ$tsG+hŽX>]Up5 ~$` ]쇚L4N`aq,.YFP"N6UC헟z7avhLnvmo$^fKklI"\@\ZXb 6Ĺp/Aq65))x o^y[]voo?^uNؿy;6q T۱ ݾ(9e4 ͐D!Ͱ`AMAZO /7;6].Q%Lꕚ#xS%Tiu\\1C W"΅כ8P;ډӈ[i+Fផvְ ]|7{ݓf_N-S\:)Q 29G]~eVtw@C0F>]Zߧ?$6Wq6~k tѮIWc9qGE}xIGm%w+DmM(6;۵]9OuYĉ^>|4G(r#Y6]r} \iߥϽ]h+B d 3M%`URþ<{kFܡo^vK Wqa0Gc\),ʓm wsE}(DRgT &{ (uۉĊnu3>/Js%yy(M{ F2m am\Rn^jx?gnd$ ܋<U+zytPtQHh kS(jP6^Vg~^<^gU>mjBN/ EF dFs9 MIn0tXpP☑ v}:U&v ;̼w ҹߜTLGI&#)уc9!Q$DO nX IͯDŽκ't\gW}W!}*4VbF8 #;<4*[ۀh@u7œfG9˒ޑ`Np,swQQtZ3z Xn\]%iZ0LpuBP"`E˔+ `F"uIzͨ`&*g1tL8I49 #n{x\4"tT*9\A|j]Wݶ]#A;xt:)'saL,deWZX}z wPdR㉢.D8e:j4 ;*ibk8FRrgÏ[0OLnK2ua,p:$Vتy`=~un))}ur71jrl4-fȚ3¤w&F0lN+Hq'-ȴf:I\y=wFx$.1I4ǁI+5""NǴ0r6\X.׏<O9hګ\ږG(=PNo6ZYebsMyy>0K#c`+ ǂjq^['T@T=,Pa#$ \e B~b%8jKApBb)uVPk=Z@,YcR f1Qu`Z:zYP匣+)pL岓)1y 1FP ªpz J8ZEAY}rI~1B0Q1R"HS&.x>M&?y-"V!r4bm'WqUy< z+OE-0֨4[ʢ1 psa%3;XA#{gq~wt:VڮV 
m7bl@htܨ8ҎMgv_)tOcwk֜ԛlNJ41<)kmH$R,"\v?(gm&mYL8ZX[IZ'-p2pP6`RNƀm&65v8+($VBz2TF2 Ǐ%]Y6⼽F&jY~ WLޜ1LLrs 1p=ܢ<\-NaG[GIĀ[q8gd,/F08GM;D90)XH є*Z'S'XTp^pjY5̂mf-s5A% bpڂfvjQԢ{Ee/^1oݱ@5Zիv !M9!(m4_3̐|]|UT;f@xl!QNF{?X@'S*yNN׬)1S0dAqL;i`DI5BJa(q"6h Zqne = MB:mꢙg Ol)0<9\0J.Mʮk7զcO(&z'SC V},FI1b0/(R{ (apjjYipja]7=H>+MJ'G^[^$>,7. Ł+KGatZiwLLTvIr|tye(n3hH!0{0W7ٯ`r,G3;nm^{?g-f\Ȃ h23Oh$|0 } 6I&~ԧ<>ndPW+@SS8]_o \͚w75/?kgEC2;{풕MNZATKHAS32l>+>AAٗ]r:{X mfv-vZ2َFMwտQ7K{8dLƎK"CVB A`B9hJx]~u6g}z8߽'o|;5M*@VWJN1x5w>rL{ JF:Ys`ap;&ߣ\sVz1|B!(:5RJ0VCK )>=|U#LE+q֑I"NZg#jcx8U:YzD.P icyt珍:y7`_A7lnU,Pm9_7Un>bw~yݕtyiҒ|J^olry:+Kt~};a#d,f#(m Ϝw[+ar"w[Zb{7u܎~U/+{s`77f)^)COH6O )l8L )<ކI6VrbV-#Iޡx6Wq2/ \Yz3Yo69 BњDi5RbpX3Hn[%Kڐ-kŪd,6{RZku k=y(ִe&nّqgv#ɼTج$ZX'j{(xWeE-F k 7徜rxޅO^޷ݜ=) QY%HVVuM[$Ku,Ge.N!)ڔTʿR[i+eJ~RߞY{#}v-6!p&V-h;&Xl^ƃ-zEQ6Zm9/),:0ɩ.hh"K[$Q$w5l9O_tcr;‚0ϨyϦyRfœ~^gQc~r׎N`b8˩ ,Y[d|A5N.u":rdI$Z1J&G^rAxвuS,"5=7oX61FrT~ 1^*k4,H.SHR!#(4Oۗ:Y՞/7duߨtwdXZ -Q釋R>gc4#~m E>;; x=v<|l88&?oskfnꎳ }u?V@WmZuyTȕ56Y^7*xK&)&z7b ѹRJЙRkOá!%o1+JYr 'd{ _ ΖF<S_=q+x~f}{jp_?/.?\~&tyv֟O/V{ZbO8O vcշW׉áCcq|4LzjnP d 89taZ#F|8(.C"88/z)gcnnZєLRU Nk'~|3g3I\*ZP|C,Ѧyk]BJɸm)S-6tČ]N@9zA<•:qԑObr1;!)}_jzm^6C(Q֊&KGT%N Ʌs),u4R0)GQ T/&3QE7EALҤrL*/hnE(O.L2r)Uv䤋ꢎ)|ւ_krfq/ A_B}_௴?bv><;f]Ա8vP:EH7(qwEM1bv{wVR?*\_MULg!Q_4^_躮D[6=V8lF͙%EFjaòxX9s¼i%(Uvk8; vZ5fӊ^_*`#&x\Mu{FEC߅&^ʼnm6#ډsmknvYpL?vG?W~.|{yqͦe2H$|/x_[v#HMLk(,,wΑ@od0r0Va6-P.NO UlY9?_gK[G|kF݅BXe-#u`m~z5!hvs ^3niJF{Y;?=~?~8$ o'ZR$)\Jӕ=ȟ.?|/S}AuU\kED~Uy׿BhO$r.̾ka<~YL@{L߿ilho=r0bhZ^-ql yǸ=.xV`$w*y[3<Po:T{]͛vPc\jZ\m,v8^[dmoMwY: R@)e ]s;Y9,~<==+';-t)[EcUHfiM99^Je.oẗ́u= _RCQ*b0BE0IlNT3}tŗ$2b,&e3OimvUx"]+n#cw?6bwwRz.7u=!riSwū/^Zizjkuм?o4V2&%FX\)OCA^/VhvO"R d79e`|gϭ>cF19syyy- +dtL@J`Hʋ\6`8`" NБaf㖇x|z㸪f3z6Z^m}tw/Аnn^X{L;#i?B |s%4ֆ)H3ģy@ۏw4_Ƭ@WAbZ+L;AГAxys"9DO,H]e%tT ` ׄXX.XO8s֖̟`Rdh[` }f.GdB"e(Ne鱗CzuuVd/Jgg<=n_7f"}\w|\џ[nQ.4Wg&WiJ6#& >clJx1y0FG]gN}oy&g| 6$Qh2Į dJ$G/( IZml*?Hxqq{fǛ1]t%WvQꚱ-׼/w`UJ~!=Z% 9eE%I zsB65mڛtZEI(-kه&Wͷ/.p8{{qV}u/~6xm ny*ѫ4&Y!wd I%@NgM/,YCI[]+@ 
cE.4]tQjzEr(Zi}~Zv0>13gy`Y$ArJ } Kq[7£a1gd hQ2g-0x2X<J,I =~AlC@9ü36\Z7ǔ pi!)|L5JAz&EyY&ϼrpS!3p k.wNu;-HC[ gQDv!@W&ePh^ih5P3)Ճ/&B<2B >ˌ1n&n vd?LxoTkLF1E]LMiKCF mo$mϋA7яkɯbbDvi)j՗=of3{]Ҕ>%?QLk&!b7ev%xDnt&ۡqEl𲐽&楐@rԅ,G m-N=,:R=ّ]b5Q۟ph瓧OOH-FtT25BqҚ"h w*&UߖX+oCYnJ|%tVL3# IIPhު-ɐ?00% б̤ *Ł'?B ֈ0 9&kȨShr?y\}%_c89Ҿ0iɮ`Q4~6r/x-wo-/c1ӓ+_T88`M-Hs6E1VVXV Ega\;0Z-ɗ\{=x>!9x拋32!Z.Df<=8/VExc(4h^>hrPF23gR2C -+9!?S4W@LHA1Ab6tsI~c(_;OO-br N뙚eКSr6*`EXOG" -מԭk]C=XJd+<3@6nH[;Pjm =͵}>ENS-҆T,-"7B 6w3X,+]Z8y!p@PhjȺ!q:=KFZ*.BRbCx#l몢 .MUڰQ٧2?B dVx9eǠ+XѶRpζq[_,ݻ/z[ܶnAQv̴OcDz-X-Il<:$w*Z%:2o#9W.95') &!ƄԊ*xUR8>;S7͘4q94.9($d&*)ԑykͥYx-ոKOM WRy*)ԑyiif+41w0P,ا૦B׺\+a|IjgOBA ;vBg>0o@G!)8'׆rnwE^UISJeݻd):1rS8KHҌx)̻2_!2τ@.RdzZR Ph`] t9K,Sڰp- hpjm:Peaq+V"cώSΌ3j/j?a -z9<"\`%U4:\Z+XqE @\1& o"h*PԱ ՞ U2iqu9T@ Z P MJW p PW6\J/WR(Jbv5l*@,\I%7W$˴S허+%1M`@ Պ]JZ\] ԁ k PJKIm:Pep2RThpr ՞z&x\J[\] "*7(WG3n%8W|pe*V"+{~j/ ɕ漸OV~*KF+ЪJ# ~Ȁpr) ՞k0ʦ]z\bJa#QnПmH[ BO'ObK;S|k޷>Iu.&)M\?!β,go_uS43ۘwp+-H0bW 6Dڄ<$(8ųY|./}M˿<[te- h->*J_r/凅Wm\Ƙȶowm~rƠ~ҺΡ

h׏+=PNUw}?~Z9Ubu%B9hJ; !&j _0?5>yyf~[U*O{揺Qs]m̀ɮ Tv<*|01iupRMKR[B@ub8H19|ujW1բO?*5et3-Z3S pm0t#8#DgGq-.]-byjס{.x]{zT¹jAJ (ó7TV|zq>?ȪDcgQUyH1\EϺQYpT)2l[~ƍ}S˾J5=J1mG e'{V{/}϶ٔE7Tyu7T>#e9E3߃;ϡy~%dyurY6J[KT&h1 x LDU&wZvkmL΢WyRcu6N56Z"Zmm̳4_ ͗r*t}6~ůM34=1e'FP-k| z'f--(v~\v\fƥ 1y+$%<$U,BR#\ˬי6wUuݴ|c5)?C [4lS8:ōwaJgo*Kgʭu8{X kk"TНۺ;Z I"b6XRkIӇؠ6v"Clm\/0@0#4\xbMP-M~`+me1m_5W(ܧ\6>4NFiqu2FPP eQ@.#,\Zx\JZ\] (pE\يUoW{ 6'Wg Ke j{ʶ:)ۈp_ Z.\ ! ՞k0Jf[\] ƐpAr4Bł+Tiq*KE[\]<_vScqe:\Zi+TiT ĕ$uq W(׊Xpj9iƟW)-s/Ňh[ߏi@*>O{׋RȚ~ן?~M;lod8UgyU8@7}H~E(hYU׉T1rM~n6+s۟H(Z37.t4'nлX~jz}?&k\2v:87ͤu9}Ɖ͡dMNN9= )E*w@eLǫzEߍa~܋+@;s|T76pcB`)eڧދJS49d\9e0$sJ챘Rn, zV>jJX^^>ZX,kc?a<9<>-"r Q:;Z}0gZp0^?ݼ pNh0 ED)zӻ-vp/i]at}+kG%Nt}<‡Ѷƚ9I+;zR%΀A{JX14oXӘliubZv(+J*ɥfN%GuĦ1nBl07 +Q+ҜԦ,uJqIA_.HֹbkG(+! ,Dž/$ TU?keU9TPU4;|}V5#P͠v~qLˡY*Ln9'Yn[Ԋ an$C1gU!θ}'v;.t*R1RPexA 3S/qSm&?By"ڙv, z'|cˮ2W+J̮*Wlw=,wީ¸OL֠ N8ƃ!HyR!Sft(ZqLW/7wFwݾ[\׋Y[M( %94ӟ˅<Ko󜼮ڋ&f˖d;~Wם4ی V4KV1պ[eNZ7 ڦ'`R!6;qfı#2fscr#IT&IlTp$B`EHd3Μu!l ?^#j3C!jfo_?9䝎щwvKd~IM-aY};vVV)lӧKޮBvP# uC,L3~:3WaLCHoBb@' ;x| ^Eb*$n:Hv2qz+Cɗ1 '~y|id B-,;"@]%}>4\e!{eat1x qVD9,Z*c`Ã!+,./]C jfϽI|^cMvc$^MG7I7 EQIfo4!2no䜲s*%g8H(n1!Y" ;Y%Iklg:\FgU-6w}FplP\÷A0xe̎0;|<\Ѭ; mG&((憡&neԲw^9N&H {vaZ=&IqFҌst\܊ t趩ya I%oƓp-6]~fԈ{f>IrԻnHllxJnK)*&Zi}{*m ݈19mUI<͆bs*-4C_wE_Meh읝l5aF" N焠2*&sVZ qZZ:ڍF$e֞{;VZJ۠ctJz%b-Ev%pJX@ZgP !+.eT>K.M f$ Cw8h)DwYV!Ue 9쁬x^v+Vfj1ܻ!.? 
1XB:)zF-bC6X)@<8ua$˖Aļa"C{@}D%|ApZL*P*5 51 8*Ex:@٦;#%v&StUs쥥@L%cF:S¾ON8'my^ |?ܹb"xjO`UT3P Pε) z>HpE@07 \S-cʔrH@#'-#fl .h QQʹCl+<1+*= 4#囝vlVnkXe8H2ha h`Ղ; 1AX)O"ȣ0uC_w@_aEw΋ ǁyAM( B1@50N!2^EG:^ݥbjwT(kbn,Zx1z9Wp1B^+cwb?1Ycfͪң YHӊd= b%FzS(Ҹ7Ɋ<.S{uh]4i`zueY3*1[COͲ M049Y$#qfs.|BH3nE2\ТpҌ~; c  ^0źo7Mu]̜B +s d?wr|q;Buhq(&MPA ce/J="ZN7~h;T2luය#m|H7R2ᰕ@ĹA :OlA8ΫB5C V.n[ޡmw?lw\jugM`N%W7@Vh4Ꝥ/~}U(&o_ W>ǫ{~x&,?ះƟ;vjgavNd7K~ad]L`3K68`yKә9՛_XP9…>`R6Q[i޶bp"ϖSKܛWeНD0s)~/U~[IMsYo0`w6|gr5k(CQG\I]մg9N^<^׽tzw0,~Da J,.T$^0x"3:]"r0P+ǭYMq4jDhd(r##PR+$ɪ,Ĥc 8l&[XfK4|Jm>4Y3~zafI@<`e1;p:bQp̑VS\iƔYx 1KfmO, Μ97eKh ;3zke~O_NlU>ڮ1.E1U _:UɈ6c,p٥ݥz8Kvr] G)M +41qQ|9 ,\O3W\+޽g &O)@H 1 i4$ FFn#ʗo?)/cP-ԑVvRi9l`#(5 ɔ4erV1֥f6n*gZ/.M&\Sgm[T=UӽRGRq=+KX&wS8Xu]J""+; >Y?9Q%fź&WdRYH;oƮû~e2XƓԾ!$ VSxt$Hiau,* R#+8;Y=;>IgDa2΀"Q k>z@21Rg3rP-kNGc=#o'S42%L \ #"Qėb @fE9h>QZGiҞ4(TA+;W,j>$VbLCws}a.4< Ⱥs%XJ A] `|RWꧠ?.6Gy=itij-R Kz t~ϠQDD/ ͹^2PTYa$fL H0HA+r[ݝ3h-U&px%_jwV*ut=2Uaꁍ4:i+.DCe0Ъ$SibPLMŠcrHK\Rq S.( YF4OđE*-h]pNGc>f9k0'"y6AԻfY! 2Du)%cD[V9%bv͘E[)Ӱp{#I$ωɸvu!WYBaal# ØT]8EUQM) -6$Z2{eiW㔇r﹃9+e`t|lib{'%4`_'#\y5mwϥ=vrc;D>iUSkS90nʙ,0-ciPJ,##$9_m@K咸O%4RtUuֳ&=pl[} \Ͼ^=rjR 8='塞t{a*p( Jtn監pS[\&pKլ`g/FތNp5eOXnhMr XXӚp[>pm=#^NB=sm!ǯ }6~.Lo?|d>ˤlr >O8Qxb0%rN.~SOaL~:z f 5t0;;oFn< _v092Mww{#̈q aj4mSGܾ|u-(d*8b=KRP(Ҥܖ ($ 鰞?g;n~Y$}HP2N%mHYbSwJ2NJO2vX㢕=lڦzGp߮N$A5;e,ں>d{xCC#i}p gg/޲'IO1½4z ʡku(o593dztmY8+2†{Faq XXS}^'+Ffب~*OocNdZr6$)SJLdP~O{dq"!}ã;Zheuf=h]]Qܠqks} 7"KGͤjrǣ:/7uqf;-_ؼssLxׯEe7,W>ߗ}ն8"j%fbs%6]>ۛNFTIu ֘w"|@Y& W!\C &1"0hRMFihǦ¦>JTbqb1fIJ`$Q娸\ # jRi`]\) '#?d<&FU`jV& 'g ><Է=uWޥ?A*9+SMídGM `+{pg7bӂ}_lZPJyXl ZAbywtUu_誠Utg$mՃk祫ϴP*_tE@W@W6=F,;?]7tZ ZmqWHWL+e2eI7ۉ Z JEt KN]0둺*p5 ]vPrt JHzEWA]Uy_誠}uw*(%?++) GtU+{ Z J{H_#])E JzCW. 
]=N;]ZҕV*`+to/tU*(=++}JxbUQW JHڧ 4WF]f_]MώD>W<3]= s%Ci=؁mzӢGt\Uk{CW@+<݇@Wd!`FËe1`Ԡ5pte׻:٘hq*hԹJ(*!<\Ÿgg1fG?tyFv:lkrJ>lT8.{\kJo/;}dD)+f% >]}iۿ\mhnSc,l蹩k#pW'm<ڇl9Vh pmgm%zuV?*[^x.[> ƃ"4WєޝRܥȒi˄E *Qknu㯳K>ѽmhutWa!Y\ݚŕJ8V)!!P)>Rk?))2xx$>c!@~=rς#&%'LJ}+{>>4vE*ՏEl5TY2Kd.f}] shg#JF֟!CM_tY*b _ojGb)P&U^[S $+ÉQYøޱԂUx k-ܻKdzp}ƝZIDtѲ}+e򜳅WM&qFY+T XJÛQHx x\h6ܲ^K +Wgh]"fr !R,K9$?wTлO[?MOඡ+kYʍJo~9d֙7dF/;ww0 3E+6HyKo 3mZ)^/%*ΤuuD8utuiJfq^PZZVDhSuo&>Ve>NO|*(:L|z83>V)l-6+brD*RŢ᪥/>xYۼbeG9Ldk2CKMO[4m܏x)vI1hc7Fl.4"j@j)]ֈ(bPtHoYlDze֒>)V+pV n\+D;B\NWQ;/M]bRJH3.!BMrW.?5xrc䱫_leK/^0t1p{Gwv[>{j9q&[⬥V=%Qu{-?j*,@E~QF(WE4~a;U2#Ɯݢ# ȑe=7SONUL'pU݊cRvT1Sh0&Ig"ph!g˓@r̒vyzoOSHj1JNT ,$-\ц%6e!~'!ϨH.''HOkBK@h'NOǣ߻xd?դ_s8.cu{w|V OLQ:#Gyr3CY:Ơ2D΢X0aڈ n-%̣=U'ID[H1Nmc!TV]rƣ46j=lۘ1g4\$ IJX┒>و)2B+P(C)7T CD8 q%jb`>;21w$Ph -Zg +ūl0Zk=# J0(I6h|p,Pd%r<2jGp0$G- .)Ad[N҈T Gc;TLpbSʹh6:^ 7W%&HK?rv^XHkXI͸:j͵BTje{&j3S˄s{yT""<Wk3KM NEBR%ZAa[HJ2%5!)Pp0&z ' .'dhy)G8D,.Nx#ҫ BAnKx&ОlЂLetB1H7hn/BEFѪlL Z 7G|Zea.j7ؤ!̥2{٥qN{Ƃ!yԁ3vAkj475Z S/fPٙRDdS&L ]v L\[vK= b#%Xp)`@G>/`  I4ZN+ Uf%RpH9&U h&? tR0L;qP iJP&+yFKΖUY9rbGZTE©FE耏iA׃ p~S&v$s_gЕ D'xWB9" -C 2q] 4Rii=k0$|*C '2!,g+ąlgVڮ\> tVT8[31i:["WNvhsT0C'XCTe%:U4/Dz1SJ\.#F 4ghȉi_?DR3X^R,LS+  Efs)3W-"SQgCSzF,3Eک);CFVLBq"{%jf!aZhf]3زN kYrU9zm5W2wmX~]|1P.fffI1%yk5KLϑmYiIJZ$sC! ^DJ)yH2$|v7,AU8,v!}QV0@(і[]Ÿ %>:À/"V @E7i(%Me;$ `z{Gވ$mP.!5iWE!#eQ:5dxzkn%zvloe]{Ķ0U$x p+P"^c .%{515rTa@#==:Rpk1+8Fd*a;؀70HqiZtVчh4(mFvfi{М>bP~=IKķ4[{XYW5)"I }\Q07PԮvw+_EѰ5(|̓4ܯC}!=a%D~zx 82IA"2̯#zA I!'C& .AGy_ 7)r`}vpuJT"V@cv{'.j5u``^Ւ2M 3"sPVǭxmRSEcXe R0,v "-*A3eI#t)9:#)GV3"W Xc!B{{'++ԜOQy>4~qMY('"zתbe dtpG(Y)+zr?,kDNH F~M*PhNqP. 
5"A^,r5@ %xRH"gWVqElDjvB)(eVKk[D6tTԻZ@^[ ۯ^&vtHXDODHo?GlCo 坢@ ⇽z8`7 j0jd@I4Qe[.ok74p)ZMqV>ڽf%`ܧF}_O3޸<8[ *cX}{ !V YYT  #PD# #]De;XEƻ趠3*ZfeσE.\t) )˭ٯH^3 xY'WWeY`bz p.{?@0 E]N]# u䏊,B0ZU)*^0FsI#5k W6) `U_CE$*Xh:%J!=@#!=+]E+ ŸtY ZVJ_s.:ckH25gozwEe AuhK{@Ͼ{Z0p磈&7^f4 nNxLKtK7*u-ޞ B߯bʸ8y1/Pbq=8?K>#3}NeN/SIe|}pRb-a9 oJJ 5ڃPn,+uH#+uXJVR:a+uXJVR:a+uXJVR:a+uXJVR:a+uXJVR:a+uXJVR:A2hZ:GՎRd3JK4JGR~VR:a+uXJVR:a+uXJVR:a+uXJVR:a+uXJVR:a+uXJVR:a+uOUPM!FӌRAi=%+uQh+uXJVR:a+uXJVR:a+uXJVR:a+uXJVR:a+uXJVR:a+uXJVR: 1cKJ (uWf:~v:,GR:$ a+uXJVR:a+uXJVR:a+uXJVR:a+uXJVR:a+uXJVR:a+uXJVRK߽\'Nh)(j{\~75f]]II{xg7 Sn{#nk$Y/NFYhi׿S}y*}} "uJ!CUS:>6e\;alwo'YR>bB)k3Tw" ]/42,K+ ԜVGnK@zẑab#R[EhzzlvLWmz>nCh@ Z+Bġd:JR>X+}3tEph wtE(c:BAj+6Ϯ o&ϾEP2]%]``/+d+tR|2H(f:BRtƻf>Za:Fr!8̠ Qv++M+tEh9t".0]!](u [+kj'2 BtҵDWnj&"::]J㙮남umItz}/ŭqu\+Ng翼 /~|Ζ>EG8FzB 3.y;ee}_#]{~}SOiIq .(GJ63zOIdjmH S{J(4Njj%Eqkңi[l:Yw[ O'Ii//_w,Օ娬~PMb:!DLN]dYUw_п7%ߗ慛k_]^b6+ur<;/P;?̖uӲLW;x&(9Ge{UuLfe-m}ŭ .i1익g뢥Z?jm`Ӿ<uӋst&01tJSD9X HUJ+[dI15e]?-yOQ384]ϧ{mDnFk~S.|vx)\kL4o&A>\Qk3x'zm{UBfmM|*amy#ꮧQ~A2fwV}^?uxɼ+ nwO}W[}u9 Tp^|s61x4Y[t hy}=W/+vmɯUf /rQ(}J^)>hB;="bKѧ^\]a_sy}6͓[ڽ;\̒`]nmpIG+>+B#EhS<GA1N $pxF?Ņ.ؖwvpg(+'ELjI׊ϫ⫊2^ 1\'iP>]V&{fGc}pN~N{JL#޸r"pMIM")7ԁuVu}99yi|ՋќΤ gR3)%֣ܵ|Kdy}|v}Xgڕ^Za~ 5}q^t_YUS6?JpjL|t偓 æD1=~*vjN74Z\,'+'넰 o r7Ǽ/V+(>3i}SJo+-M\K"?{Ƒ] ca8d:$Y,pXCD8lU )␔4IkG8~MW_uWWȸ#PtN -[LH+ nrP)g> _}] SY}7Ay>ciCyFC_ؐ56O<%ٔOβo'lN`]ֱm~7,.b1 QW+G| /1de.Oڛ0B!M](*L&٨pỔ,I[I`Ee,0wMe\?7-,{rp~~0(=|(A{d3eQ{Zl0M{L>Z+͐~\cjQ2Ic+RwHPkpΘFbJ'4ګ[ 梅w>O9+hgN/e^EZXeTYc@*h,@BA!?'0^9^m܂)R]ݮ'`i"rfZ-*]:Y]_y[Ne:03FoW=Tu¬NilNZ9r:ʯ8|u DVVPzt^^_=LgQntVf=٬*LWP2oVծ{5\ys[iJ_hM}ӧƼ͚lNX 7}'Wێz9liN?-yrg€nɝ1N f;:qo:sNt Fi`w*% <ہ _a*ss;;75[LM(9ÚMYFXB̦b}!fS/C~3!fŖ^F[xg0; tjaxδ P5b=:%\St@NQtsD: 9ʉ(YFZFq9DHGY1WN#@ b1sj4HA10BTHl|L:Yw6 ; O~1^@tjWgyBhc3u=( CfnB)9w(D1guXA:)si 72{ ȝ!;ܸurjMcqPNtJaX2XzCz0!h!ʦ*8m\t?jKh0"#R) J@(C:H(t VugKsEsX>qhPU lÏKzp0L D0[G,@u# yoCb 7IlB 6RQcq çGD@&7>Bwz[ladP &:ط"Ի>gD`ɠDZ` Rn!4WqFPsScS 뫃 c&سfheu}V;L_^U7Z.Iyc J&wsV \9K?΃I/> ;p*ɭ*drEz&M*Z\Ho}u ENQF 
16rD_<_)uxjs՜k,鮲cMnvv}[d#}- g8V`vqg]}u;_-{ya^{ݰtL|v1e\ٹdmupin. OxNFm%wdukJ0bF#_vlYPR =N{Z'y6-M/Buhq(&MP ce#n G^zDmtSeWheձU{(L-[ N ^ Aڪ FX{Xʫջ=tv[0Dfi1}yy]2Hc8G5S"MH2LƈpD?{F=0H bw^^88ql$2غXv$[([m5VwY-M9v?8TAm@/?T7FkwR; O'toErȦ~|'o+,0;B- sQ[m#\i$wJ9Ӛd_|P~U[z%XJMPt>9ګFF]]7Ey=I2.9c4խww^@y=2"vÈo!uɂbPa0 &|sRȢ@s0 -7uc+0ߝb3]u{BC]*VTאg5WOɜ8O6h/ &&bA̴d#*=K=$.P>Rۗ*Bw=FId϶@RI<{ P%RԪP&C]J,f,_Ύ='*>6=e{n;.<}hèqx&/!+=b] K_ N$ٮ6gl@4ƺ/A\ LSh/y( yyPcJmp' I  da(3Nh/\څ!((:C㠗ks&!HS2QdLSQ(IdT((1*]Z *Sȍ=fϼn#\[uDQ~lq#KG#4N|S|_~BR6o ݸ. tn=C%y{N-@dt2LX蘜e_ҧuehT)133zg87i*fpX/҇HڽD1q#]7u ݛz_Wg y45 hJ볟Rfu^F7]}lʦͿg%}7LUו |`}UѾm~{ۧT)YGCuVlΡt`<ݼ=ͪݎWE~;l@j5`‚Ϝ&8uTհ%쬚z>b*@1!ԙFUm.b G49h<"]0E5oa1 h/;@"]uɵTxJd+MtjARR;uKR:.k}_g^U?r@h1]w/~~jI^_WS:nM`'{ 2?TW㼛Z=aIk~]PݤtmJ.u'Hէ zko$#)x٫愆tTԱ/)W1}d~_zmv6~y,-n:w=[COʼn١wa4',:jqv(tW F,owpSw+aL;'GW㥨Oќ>gzW$7pUu+5JiWGW}*p7pUUxpU4r#+:jWZ.xoઊ+e_J JiWWR&Wӳw>.,g #=&T:Ofu#O~]eۿןy/^$ЯqsK`7>{ѫ5ߍ&N//|w䧓UyZ0~pW(v;ݺxEn4f5uq2S2]Jr;.`]oX$8 \eɂ)wNjp3\g] s6P/Yы vA#mTjTv F_-T؀sɂugeV8=^Mp}Zo%,rI\á/.vVUJ]l+P*<6+AևWU\WUZ< !< \uH`Q5*;xvER*1#\:ܧ*+ \UiQ:\npezK4?;\=L` W煫IyaR+;c^(=+W;"q} ኤt+RpubNq hO1HE-G5eܠBά0PFZW dUڈ$Ms7c-]M|k!CդA3Ց3ծ.dD.AEż"BXW ]9X`|`7!^|Gʲogq:aragC@5GUY˫W\dpOMoRǧ산Oeϣ2P$h!bI;_TIqWeAp ڑBsHcl 1Bp:zE2ޠ"f O\+,a+Te߮ݖ/E7W4[ePp3r6< Yblr'W&ai=-j~X:߿r畯o8#w4ta'P7~\u:aƓi ZR{k }0D8mBA$V%}V p1-Ғ3Zz+m1Xҥb@M1]Hh\˘(-h Vp\:W 9}$bTsN1t,R[scj&Ζ`ՁMjy2U_ˆJ&OyJdd)tZŁBfVRVn0H0$ .|aC;,n ic!N68w{HIXPbLlJ`r*fe9 a!H,-ŋ˯G͔1B ˏ-]ޫI%qџ7nD6{OVnmI~eֈE`>&X8Ru"zqr>.mAKCFCgeV8 N_T򏛊V34|>lQx '^qؤK+)2& 1LbA "aH& {@zytce]˱fz"8ވrZ`Kw賽KG_dCwB)tpja՘gmOnśp1藧Wmt'Koi9ñ;8=Mr _V$~bRÔ`8>Ez[ҏ1[NZ~_UθCtķM}nUWaSb;H0snvnv {,|53GFa]KDZb}7Mݲ`RwtN(sevB׬; mg&((n CMˋeKo*r$JԽDjnܳwO$3I<.thuV} ^ ӭyJMGp-6Vt.?;ވ53j|;{f}.ޓN7wՐj4\RTLM,P jwtmfVF6Q${'/Ti9I^QENG 'l.?5K r]MC2'}|ܹv!$jY"o,]o|?L"Ӗ{[!uB#"3A;TbVWR9O?KRZ{BͲz umwO~:򻗻g 3e1p@\p"D(8'Q1kR$%LpJfQf5 Ʊ-?vZk/*o)-J ͔$=I |&(MC(\* RA>+$ƞ0a|~?j*r5[S|Yb}cc jf+Y\ɪJ\.Lg BGJs]BB- B*.wiq颮޶p"ϖ廩{7 A%>Q4湎Ǖ/Ώ;'t鋚-.dռ(޾9Xlb7x\mNd8*@+`â">_RYMuaxwc-TA"2YqץI  HnڞfXo8-ofE@fX]U@=4zU2Z }r!K#RH'hQ PFc^ 
X;à &kUhT !04᠙-(`ZL*P ;TP"HBa9nJwKBsIN"S$C2`9iF5+0fcuQGk

= G @-l!MyZp 4&O"ȳ0uuC_@_[aEk o΋ ǁyAM( B1@50N!2^EG:^=swP K͞링臯Ebk_uqi*<S_;TbZ(^޸Zo?~#(m3{F$swbᙚEݼ2:*d|\x;% қvawzV-7k;gMy2r=3➇1Cm$ZGb|HmÐa$ao@B>LѠ~]afEo$季~C k~C$4SХXCUāR0Zө=Ϻ^]l?Ǔ_Ϗߟ_ߝ|׏'@ 'oAV`)lJy7r)X4ɯK1Q~xX7*IG///ˊ P o|hmho:4Ul9͂o0 9eܿ-4a`W,Do;}BW֪8._雛 - 'f0V9`( 5YS{~m`L$MP]!eEvE3:i0 b]Ws2XFHC1io H+'5FxQvS¤VǶnہP짤{(L-[ N ^ Aڪ@FX{չ=q{hV nE?m ;ǜU͡G0z5oP3aZ0,^P>Zi,~DA& J,.T$҃Aѹ4K3}4>W Ҙyu60 )G5S"MH2Lƈ`Db$2Y1֢؟y:t^^jz甭UExX]z=SjL;M5Wk h[,~48NVaYeP+\)4c`,fأ#KǸ*ѾTƒQc,v.4y]^)[F \;ZÔYɅE|8(C^^3^_^KxĔPK[Q[FM4Ke`eL##Kv:|`>]R} :1 nV*mUc0| lWp$a F СL}*ƺ{uUe]_ 6ѸJ1Yu{^l#qT\R2?()@@!jeRDȼH BlNsEA/Euuob]I2ٴloV&=IDa2΀"Q k>z@21Rg3rP-kNGc o'S42%L \ #"Qėb @fE9h>QZGim¶@wTA+;W,j0¹>)Hs3N @ s* ]m#Ε`+`t%Zu0~,lzR<))6%/(1 ^@y= r^A/sǽV #1cA&D0ht+$EF愗.ƕĈcEt{d}KWOc5/&A<Y10&X 817X[˵y`?*du:]uWc9[f9=#ꦕ[BGV2d)1g(HpS,T,[q0,Ӗ;/Fzfd7Fd_g`,-pYI,RZ"R߮2P֪T^iOJ#-SV;lZ:wGq/xYŦq3x)/Y&xJSd Iƍp!pB&:EXrk"+2+l;EaF2͊"bQ*Gd5FUFA ɭrk٦s7Zpyqn&r'rfs/~zv =7*ӧtI~'\RZq-\7 [ 9 ! ScY"w^H^1(aJƣ)a T**P277Y{.zj6H(_ [C{-`T2lg59 ES~1vIz}5‡ʪ(wsekJ۫?뒾-f4UYp|j7wwm˯rkqZ 5g_PgN.C<B숧bHe]vx]W9Sjp) .d fJZnIg"^BdUC:/IwQ,|BX, IU!⨓H)c0|$ İE,q;E-R$}I'yDeHƵ[]j؅Re ZA7QUS(ԁP59UmKg=y{B @7@ VhXL:uTjr|3{~3a|Hv'7[N!f0f&Q@9`dtfH98qw  I ,Hģ%"`ko[OzgTtýk 'ų,#9ppɼD홌4:kEGZtPNhhuRN1 3os-" iHKkpܞ~yƣpF*DŽv'H +=6¯ ЇWtעl+]:.\!f&k/xtNeS\$a"9qŎ@b9A˯:]=Kv%=9"0N`]Z x{|˂f(,eL?],r\Oj~ݟ|sA7VxmM! qITɣq{PJk=g Uw/ɱHZHj;jTe=xsSCTKիq hc6IsP'ۊ/weK2j\(z!$sT(֩l`Z~+_t%'pv7t]+8 Q2Yҕ"2S8 T*y;bPͅ1 Rr{gIĭ<(ELHyWXń] PLҝ7:h=uEwaϑd5Lvv.e]+DN~ QΑ x:~5 ]!\ }pBwaϑ8~&U3V<]!J8KR/`w?1"s4LxRgSDI)UNGTUE:V"2.r7L^h1??X-ɋ|.DpBD?$Fu&4g򘕧$Ia=aA2e( (VMr hpIZ`^ZiA?.Ɛg])3E1U&[o*kz:\ Z%GA2\x%\a)UZ'=3M:)}0ei=|J@Z8KAhmڙ:R3T{ҋ\d.M$8=!0+D`tȂƨB$Iy UN-ک35 W!Jt ΁bJjA/Bތ>̅0c79ˉYf8h3sL+ۖ\0KksGEp1s !ƫ z4e֯`g*oj_[Zo#ya*Q(#_J&AӄPI$Ry! 
Fy葲Ct0JI@|cbPMZ3^,ر;K&-rQ*Jal§y^Xbp/qa@փ.N6=+|_~yYKHo.%e%e腁oo*F ~LxeF7~~%74h!Ev$m3<5coDᑖ'/Ou.XS' ơ<@ ꅳ ;}Ѭ@;vFiO*DtaY_hا)̪O} E o feof^L/vWRRnݟK(WLX>J?&)[|q\,NǺڮ/7 8KŇ/3R_($+6$(jmU=8sQ;k95ƃHKMF˰]1+|ձԴolj+I]-(^Y(Em`T/in\g<9L$GvJ%C%F+- ^TC HZrƓ4v\nOE[; Lׯ!~ߺq܍jcۘWڤjx8F赶Y >\ lmaNHHqmEpħņwrV&,nw羫lI+rzuㆆP3r9~|;Xkց&#̹{1 8+u$Jזj˨J]6^e s~7?r6ߪG]PjhREKKډB*5=GVGL=M6rho935'|ۙ\D/r(XKΨPG 0u7x1ox1F"LG[NөVn:}Y)i:qҧEXEւ%\2ͮXW6^r#-pݢ}ִ# l"RŢ W 4ISrmUeu4j^v%k]ᄲv4Ѧ{aԎcѤ KR5\ Y$2LtU'o#JG/R1=R%^*ev:q@k{a|g`%ɺ:e3DogCLbX1&kU⛉ɂ* ~/ǚ:kTAIv+߸!aζ.|3j (h)v_XʱJ#EmonpyRQ^k*,%Y9Ri$]?C7+TW7D詻oR};CM2+?ϮoʥORv`яεxOK<1p`4L:YY8nc)4Lsw.)٥ÀeL?]* zZUotLwݰQ=kyezvFGU &+*K/ŲG"kob^{K-iCWVwGy -PFjz6頕B6e_!z;քθWȮ؈VSw%%b+a+DwV]+D{tut%UFtpuq vgў3Do`%]a>e:DWX$A3 QΑtD;W=}gPʽ(=]]MS|{=9IvRȏèeOxn,Yk'JFEBA*%M] :86{ Ε95 a$AhlFij1]G/g>3}A pJB 3+L$LK m ,=acؠd6)1A۠9e" 63\9^piܶ[kTN*ur*qW2$嵓`([C gm԰?Cэay*#1](̖eW_IwxHX-N9e"3_$0PIzt+N 9fae!  <: @%3L4XA$le)$jbOޛTv N:vEog!ؒMJ &)Քwu:tEjBO.BDSܑS?l~ziD54u^k&JUJ[:I#JVL R3\8  `3ZT\ PR8`9q0 7!AYK%5 b`-rIuyq\OF:@  D ;IeJs#L;OకGǩ,W2UPDAB aqH=Itq d,Z0XEeHt\ '02QIx>E0qN)!gy6hi)\J`F&?\դVf!h㒻y&P0B͊֫3 IH5nz;(-3!-P3'aVir!z"Kz&FnJUѿAOX;a'O\yb,i h=z)q(8nA3* 0J 4tJRVJ)>""@Ǿy$Oc<']B:&JK:JxW1wUJ&ztSFDWQ`2J2:Jv}rsJRJ#Cli=PeRL֫qK,TֻtK%5& BSS4CǝQ,q%X7_Jz|O?&|[^ĎS^Bnj z[9t_*yq"97s/x=_;/y5 Lt) $+~=VvX'qgPhk%6& ՃpcK0xɰVY,nĸaڀceBe5rl9DŽLl_>F{6YX2#Ɩ*' Ml 4U)vr{5cFmС8MW %~2ZvњܪnSL|,/0YĬ{ly4z@wҴzuEv%N׶RZ`> | @J+v<XJgag T>M至e"MJf-s{XUgk_?f볙kz,ltI?VF e<*XSU؛'DOK&9$/HմGϮeN) ɵSrj>Y^r)2í_&'Ozh5bfuBn:B-[rABRJ\Rq5LM^囥)ΚrO4Sm2G1>QK>)QS+Y.  " {㸣4>\C (Hqh$0,`"DzB/ @($uYAtJIB1OY>>+L0F i%,HF X-D';o!e)i=,ÐчVe éoWIJLa=}DH3'G$ +O{~ !yt#*~4n<]Cy͡œĒ<8.y\jXRaCr%^s. 
Vы qST 'CSMMLWa< A(5Pl {3 x,œaL1ղ/ g^SHx¬=A(f YE'́9LEt.cBDu$8$xp 񂣸b8鐗"L8@DcH l}&`0JO|r'x˱KΠ4Q JW>s"AT8#ŅLP J% 9DYGzxb 閊is[ӸSzX Th-Yٽ|4a,Bc ɚ{R/V9# J&`JF7 dM:S#Y 9 t/θac ʪKqi{o?KWll"BЦjΤCl4hbĨހ(#Ul$;z[lۊ5ؑ(yTVxX-oȪ);g'k_|o7' ͗EjtJgř&PS[H!YRR:sҀ`ػ 08Z" VQ'\'H>a  瘶F#!,Hz29MLgKzKJAr,Gj &3L9K5kR45W)9c3J,VPaA5hF^=2t9 1Ȧ[[O‚/~ 働1~y:)~)ơ|H#/XY;Y3},6[z7[P0}XQP{jn||WmH헕Wnc)k.fE%3sX@TT$xh r3í!εq宛IOx@.+Z=L*IJhF27SaF/?jvZ~8o A;m.l$DL W-Pׯ &[kC C5x_6kejyWpUOS3(Nfs* :٨?}_)~\.qk^[(RWHBIAo;uTZΝ-\R EnwZ}{ŀS9PɽVZSFR$b2чhF/{`q{7<~'Jupa?\ʺb۱MX6{yjt8*6ϕOv.=Y֌~I%B>+n@GZ߻٪pK\ϗNASyi=dEj']OƷ6 eT5RAz|Ǎ䖈l{k3H~ 'n uTDtwĞH)IXl60}N?/uCzթի_MmyD[o&b3ınqG@D0"]̀l"x܂i}`bݯcI%Tkכ?OUtF}:󵧛6jB@I%ԫv}>sQ㖨y1lǟ>~:"Lɇbae^_}`"IrTqAs@NOHU Jq䊬WwDeՎͧzwa\b9Emw|X%Y\!R]nM(j:5TwS˕Qj@qiz Z<&.L6aT|#o<7LpӾ2u^O1M1ӯi|P%@(I[]ܧ壏T}Q!D* mEY2ǰDIph(b yTUJڣFmsF}hoUl*=;O=wAEdRj2k^Q#9_]Ggof˥r^osYp.iD^tah`vNyyRxp-ėMS]&5QYUEQ fPj`L+m+Et%@=$`40F?tj:e%vF9͢tIJ*{!. oɊ'9>x`S]X`Ȍ5EsaZ+SI`/yLFr$΄)93@{""O!D*[EHb#Uz<J=!,TT`X(>`qWTNކ5NړNIh?(T{a FSkUCc=dA*TWq6n.8 ݦE)\g}^w|dOHU_#üg*)& q['Y F4CTN(̣cxBcbk0#&H,dh#-m!"$l\(%3".ʏRIXET?bhӃ^9;\ )Y,@C<ֲ{ p{pK_nݱʂ dnзt6(Yl5P'X`|eV݂eu- UጀO/?Z2wtuE8_VuN :RJMʾ}al9p\R)d2Irij,-Q*L묐B,ylk겢{%@xB(#B8"[ӭ_ے*-7T"Qe92*lEk?mK }zB-^&T7Ĕ0;D<G~7.r r.A8A0@MP*+жxhzEELR-k,ϧ Ns$V9Y> k`vԈؚѡhJ.~E).N:LSѾOw5j<5?1C*95Z'ˁ1MR Uߧ13"b8P}y|Uؙ+*\\d1 ['ƺuOj|S#`ݰG#gѺ(!67qN Y'8`';ޡ5d8Q{CrJjNnK8]UjѲ]0ܤ~?KnѶ)b#;M&u3t }fS5dH;": {|泴zbaO!N>64!۰ɪ--f%$0|b2S[nmttS =m?MC_Rzaw7ء {mX،i)XcNIaz rOCb=-uf"栞Ak]VkEΡdGr'i\2|OW Wfrʆ lIS'ソο?g{o 13 rڿ.zuLmKG.Zj|۟Z(a^(sI!g zC?Qa?z\>لyl)O>RK'61&LI3V}/ImV`aؒql>vc:+=P6սu: $ʳJE;=>=Bc-8E2-&SaOp$aL'5 _b;znxfi'-4[|Dҵ ox@aaY8 l~V+-YVǁ![ J A8#0Kef>A v7 ã' ;,U౔U*U[*Ju~$q<=4=*#eT^,*3Ւv:TT87(H.Uw^]KP\ R eF:/"gh ..ؚT'"1,SjwBx\jYóҁG$]12*ծQ*I@:bO{ɂox!DFFOv!ij S%W Me{cX `xxQtO;E4 p's4i`E`1jגwNŨ&;c=A#-kۉ֧h.~ZkVdd`W$#jĆhL1N4.upvN?nUujq]M614 \A,&Ig] hɥAwQ\Nk_0jG#U 평21Iln.y JMo=CZwE^"14J}3|`e$J9!!ԚF)Mj s* [IR eJP:6e%*( ԥ:Ս~ nO5;A0'B)z@ 릎ieHer6X&u'&>?NB* /S@gN )hVHmd0"I |/2?_Fޯ?^&qQr׼^K'$Mn\יּ=! 
)GM5.kt{[uSS}݅ ֢h^|`h"ӝ{/0->-Hbo͇(Ãƚ کG;ce}7cɚp9ɛwd6>u4Y۲ǦƇcna$ar|%NzhJ(@%+DD&,ɱnB-ޑ]tP^bɘH-\h&C&w-~:c3W/ݜd=y$qKG$(e`!%\I я@sku6"`WrOWԯu"9>;kiEnXAZf и^06|>grެtoix/D yO<.lDsl%#K96ވft,\zކk6-;ȹzc6ՙUF `:3pRTH3Ŵyv iؒ,3QeO1t}]hBʄJՄ6 h7MLஸ!Ryok2fZ1"cnDa,h螜1NߗbItst"g{NM,??ٿ~qWǽ05x9c ?T| ze^فW큨O(ຖ:' :,J]ҲOBT8R\ eཿ[j>7La~3i6xY  XMv~{Y?뗝+*yjz,ʗnYXLG5UN nJ̢`c CQ*.Zz^* ]?|_%jۻז?VS X0RI&\>K,1¤t^i`c&t֮?h<жwU(,!)P) &g ӗ`)U4Ri|xi疰k穒UUJOQ^bbqRg DK`<}mH)b"8S>=AN۷#[_<ĕ"I흝$>r 秞 Uq__^[]O Iύ6{-ط./^k<ro䊦TՂzCUސ=_ܝo"8EQZ9FZODD5}ȮSjYRhoL0}_"?VYU:V`7= z*qGSM9L>"y/G𦾽gz*}TV_o/_)ΠR.;+umx[tݎ?czpvNB4-<6Rr@R!:Wpi0MKx;9EIL\,IcCǛ\{?Ƿ.A ч|6mv,|AZ;> C/Yw< FH6x{N?TV{th71)Y)%W)}UY@T* l2IZ|(UP k5>fiWxN OAi[E\a2\W5RcEm]EA k[b8ZOA*PAĽO2:JI2FkV29l]O7IJ`mR~ k)ietQ 4F%16k"Z#4uZ~Ysmjf=$汼!Lw'Hi빖>Խ2HTߧQC!̛9x;&FnѫAڬRjKKV3:H+%7^Nj ^9Zks]m>6c t@J'%,V*I;Zjy:&eYIǍP,$yHP1,#Ea &=sjPh!l*=WhzaG!hDGjӿGm^-f]c6eCH^LC6+CP+H.aoZbzk%$٫R's?܂/ugcɄՊ#M5M#]7+N""C"HП8HӌmJkrdG Gu}u \ғg^zBK;:- ]LdsX3$awB ofkv:J67 K1oc hG/`-@A2)m A@=sqC"C!kvf(c '@-[|lo \ T tAD0ipB6)b;0~M~̧7#G 3P#Q<0EƬaxژi&jkZrkK95;_[1hد%%߃f`_}lohȌ='x:q[HL`d#ߌ. $qٱPĒi-'l*aGOOO&'Oώ 6X|L+G6\\ܻ{/],=m-nmGz2U~[ØXxv,s [st dB w+^:^k_r!AY#9t.hk1hZd3[sN҇p{淥C"[&z/cME}P]r|xQ/F_\fV7w]}w7iSԃJ3~˜,aS Ip<Ϳ0l|%- ʬ@j 6t3<?D. B }ciJHl| ɽw<7̜ݘo߾鐡hi@M,^JڹbEQQ)cdC\rZ}齶Fm ,䌡Vc=z9aѼ?VfBH?oPٝ'ƚZ^&9f -ioz:{=́bB0wgqڼv_ٿ$ץWJ:a~g>:wnс .'Ŝ-DUiTLz:Vvz6tS4*;3)H.#J!uJ,jW]Ql$lDI4P CDz,ZɚdLZmHi\Io[)hK{9#?b$ (<$Lpڣtzb\ ‚{b?VByH3Qs9adWbm!B=s"\S7@F{ES琵ha1 GkMKnaZܠ!ioSZ=F9׽Nب* y=Z`#r6{( aOL buBHݨZ \+!  
FZrɴ `2 ]x"1Ǜ-HىB }kޱ(M(!Xe;ϐ۶ {^-HJk0i![F+ɹ?gnQ+!I*EJ.h?^\[x]0u][a RP\6%(kި{.1TTuHUa$RձLVtNjq5RQ:v7II4M67?6ZDO9C`$3u 0 W"S߉)$6`\MxM Ps~@}ffl8K4',=N\䩔rM8Bi"m.W02{Iu̓h,@ >XE=M8<#G L|^gwmfR9Y묬xFqPBCzn4- UStAlG4b2pd>>d':RZka_r[+N:}"X>`l$v9gthe15㚎\ v{&rٸN&Q/{ߛ;POs d[,r+EnhYk@+Ģ҆ZBC m-A2$n޾uBköT eTK'847ӑ4}~U} tr !J%U&Lڼ(Gs[h; 55!i'By';Qs!0õ`YP+.r OC57 `;T#gd2'l1uur$ʻ2B9LO7a,mq;If$쎠/Œ\-YםZp%T{ v$;Rގ}݊otuL9A/\/\h49AD uLVFq!9ȥD#8φ u |kNP灜_p]nY㰖rU f vG))-Z2JOw7.T<϶^4̜SdKAF.QOqhpѝwayF4W>|p׿nXd}]ZW׉y:RUmҴ}/{Wɑ@q~xFnԒeLUWSʬ* VUfe$Gȶ\5\{JeEhy 9 CW<^YO,2 J})Lz_/?ȴ:9;xtXe+Q߻{O_cPLJS+ *iى"KĽuȕ_E+Z~䓶VPrr5dK.Y6Z%D#))Lf6&3mAZE]ڣ^b%Պ بTAŚ"RRLh|c[129$RZAagaA֢=rgAәɼ<;ṙa%32^-5ozBbdZ2U19 i k (s*8#s7%/Z;7gf h9FBM LJs4&aM*|GL΍5jc5¸ccʤ}mQ Cknhf\\oʇ~O}^`hotN6t.ɉSh࠺1 Pٗ2˟ [|@ETm:'N  )DڍHYp6S7x ~n0f$ڽp>K  ~N^J8?x3>4Q܋ŕ2lȅZ1ƃ VF ?fǝ0uQNŃD ʇx-2,WV  KeōƦ9,l [)W]i18 P%:$O4*tcgusADbJ@>zj -yաdDj5JACc@W ǨK,#ћ ^J*.,0 e#b;ߓ:.aK#zOϾb$#Z4/iۂv[WCVMR-#r GEiц(KM,&6S\pEAX*t׬}%7|\V||\rJ]$f }>"rz~)/u?M P81y%2&; *MbV jPQIbZ@֏QA !gVL4!hr,O=W@{ _蟊5St֧򵝠rvJh@AL蟻P3Uj2%RxsҖokl{[.>km'N.#Sf UܝL,Jb]贷kVv'Mbh ) FF,{ꃐ^2;ޘW \z ؘڊM;YIBhI0^ӐVUΘFyE7 f_~Z(gkRRw:J~/`KZZ/hj(^UB1bja4)fUޞB&!+COyW8O"taG]8uSrFHαkS ֮! 
%"zjtYD:3ggE;0dD~/LO۠ 6DDPDE˔%#ܣf]A @q!2UZ n@ EgKmoƾg%Fhb>Kc3E+e~ w^39B*skpW1Y>hSk$G"һRn1\K]>ueFdqyrf”{D/(a_U5l{E-yewڭi.'Z2,}^P>'<"6(L:Q.i&F@-` =\ "{Jn o"D:JJ!$"XJU:әaقd?Y&8aM!nS6R qu* 5W-Ł@L`8VamF{U[*z4WXX^GU5JU7?ޝ_A-깠BvzD*F%w~-q?8xz_RXXpL@5+# 0Jc'^u2tj+Zv 21s?ƱW7؂Ňï^G+M7ퟚ!b_x6BI<-ڡn=s-Rbs'jAG7,ry9LgO&Gumsx/^R/|rd1~} a]|zptS$93|vN¿t>k%gP/rpF>`iM_h_l,qS(o(}\ mQfp͑sq2 g /{2qvPc!X~fKG(ЏOʣB`}q_zt~hJZ93!zw O&~z+-l)N~=L sk<1Bheə?]7薷w*>,bf>" .5͗>:>nW'ԏrߚ q"B0d~h%Z_a\CvT &qpG~=~%:6;OqvG qN,U +\I#/as"ɒWlU!'uaGG4Ehm?AxK}ۄ%Wxh4MM2_>%'6)x$N4$DSr\X\o= ({WϏ5n[00l.g*GbYKɇf1`{ E7Gi/ F _4lEoOxozȑ_i%fZ}",9f^}ؒ3Am[6[-_3Dđh6YUdqf:?|4Q eBgF0BMUgvyp5jN2Bg'~tȢ]6ZR&G#Ω*lňse l`'ɋOW| 1e̓)/nRb,uk 3W_* O-oMSG(qv-vo繕s+|634LRP҆W6v`6QB(yE ;F EO6:w](, sa0( <+̯ 8C]Tz\~|@Cp@ތ`VKCIAT vܐjF /n@v\XB)JIFJUX3T&uFMQf&XӍм8Ȥ>WIk0 Y}FxftR.kտ7ea>fc<RaWߟӓ$UDof}q|iKE.` ""E8[|f(1/SRBo'?`$PbA$lHѹ1$aZ |k>i4hLj_N&0X|(2tk>:Č:;}6|҆2~GyM$ fۑ0JD͜a ŐdȩOēӟOC._zV0-[Wi׮׸ΓoY1Y$yjd[T-W[ ,t:::%L0 bZUeEV{@2R \Ђ9,wnkǤ2sg0g]feLs=j|u ^;[Zae[_9vei&֠t3z0z'Ӆì0cu `*scXM|O0q] jgL,U9]G'3F;ϵ&nL.7&::߳yi'G*!uS@ E޺'9=hTKAGNMֈv0LQ3 -a_d{1L],'1-TԳb #(LݗHoJi3:,_>j`k7'mkŋ5N]l<\(:/[p#W8(#mN@o'H54r U<qMt٠ޫ=:Pj*;iʫe~KFS!ft"P$)+d ^Dҍ[W>/-Ak;EYvnkoKڜ~Ɉ.[x $.=ٕn"/i | Lky{I_V@k/] Hc+szk\ֆ~;ǶmkS]ڸ?ONέtyQf-)_;{®3N^N}|lS~cܝۇ_?oW]zovG kV6j07^2 y@[_NgtA%,O=ֿ}K BB"%S"PĐs7i:PuV\JX7 43xq~<gd]WBn~/vb7 b6Qn8x~e[v;S/pTf|l2+G״Ğ_Gfu$j-#o^+5Jnz'n==rlm9#*RCHIb͗ds ~*gZέuL4=LweM^LluH_Yۏ# Zdwf-Pb:efBJu^TH^dYɭ`37}oˋEj3`/tZԪ^VZj5J[NC!/(PnHY ]"J#p)@\L 420jJ°RF#bcƈݰZ a?.GэȾ.ǡ]&Ԫ=DcZj5ըV;U}g5qxgE(Q&L'+ f.|UD2e" e6QFs 1D"Jdݜ!Æ]x]_t{lxpF#AcKuc{Uq5ӪbnvwUx.K23UWBȈjZJJi2Y(eҬ|?BVػ}St@?FDnߦ437}.(նpltm[H;\H5wG7fw&|>q> @=(PzJ\fdXf*2 ]HZ lX gSPq6LWsU#*C{@݉-5G9b uv7.a]-˒UH C8*gL=BGO>z=yP!D(Q&L'4 e8[  #D(s/us<var/home/core/zuul-output/logs/kubelet.log0000644000000000000000001724750115136775737017731 0ustar rootrootJan 30 00:10:18 crc systemd[1]: Starting Kubernetes Kubelet... 
Jan 30 00:10:20 crc kubenswrapper[5104]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 30 00:10:20 crc kubenswrapper[5104]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Jan 30 00:10:20 crc kubenswrapper[5104]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 30 00:10:20 crc kubenswrapper[5104]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 30 00:10:20 crc kubenswrapper[5104]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 30 00:10:20 crc kubenswrapper[5104]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.198291 5104 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.204225 5104 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.204262 5104 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.204272 5104 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.204281 5104 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.204291 5104 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.204299 5104 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.204306 5104 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.204314 5104 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.204321 5104 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.204328 5104 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.204339 5104 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.204349 5104 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.204357 5104 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.204365 5104 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.204374 5104 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.204382 5104 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.204389 5104 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.204398 5104 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.204406 5104 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.204413 5104 feature_gate.go:328] unrecognized feature gate: SignatureStores
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.204421 5104 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.204446 5104 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.204454 5104 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.204462 5104 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.204470 5104 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.204477 5104 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.204484 5104 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.204492 5104 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.204498 5104 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.204506 5104 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.204513 5104 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.204525 5104 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.204534 5104 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.204542 5104 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.204549 5104 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.204556 5104 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.204564 5104 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.204571 5104 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.204579 5104 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.204586 5104 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.204593 5104 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.204600 5104 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.204607 5104 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.204614 5104 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.204621 5104 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.204630 5104 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.204637 5104 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.204645 5104 feature_gate.go:328] unrecognized feature gate: Example2
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.204652 5104 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.204661 5104 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.204669 5104 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.204676 5104 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.204684 5104 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.204690 5104 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.204697 5104 feature_gate.go:328] unrecognized feature gate: NewOLM
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.204705 5104 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.204712 5104 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.205337 5104 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.205361 5104 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.205371 5104 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.205381 5104 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.205391 5104 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.205401 5104 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.205410 5104 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.205419 5104 feature_gate.go:328] unrecognized feature gate: DualReplica
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.205429 5104 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.205438 5104 feature_gate.go:328] unrecognized feature gate: Example
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.205447 5104 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.205457 5104 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.205465 5104 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.205473 5104 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.205481 5104 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.205490 5104 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.205499 5104 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.205508 5104 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.205520 5104 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.205529 5104 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.205540 5104 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.205548 5104 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.205572 5104 feature_gate.go:328] unrecognized feature gate: PinnedImages
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.205581 5104 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.205589 5104 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.205598 5104 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.205606 5104 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.205614 5104 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.205622 5104 feature_gate.go:328] unrecognized feature gate: OVNObservability
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.208446 5104 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.208476 5104 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.208486 5104 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.208494 5104 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.208502 5104 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.208510 5104 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.208517 5104 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.208525 5104 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.208532 5104 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.208540 5104 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.208547 5104 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.208554 5104 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.208561 5104 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.208568 5104 feature_gate.go:328] unrecognized feature gate: PinnedImages
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.208575 5104 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.208582 5104 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.208590 5104 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.208600 5104 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.208608 5104 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.208615 5104 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.208622 5104 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.208631 5104 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.208638 5104 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.208646 5104 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.208652 5104 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.208660 5104 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.208667 5104 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.208674 5104 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.208681 5104 feature_gate.go:328] unrecognized feature gate: OVNObservability
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.208690 5104 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.208698 5104 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.208705 5104 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.208712 5104 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.208720 5104 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.208727 5104 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.208734 5104 feature_gate.go:328] unrecognized feature gate: Example2
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.208741 5104 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.208748 5104 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.208755 5104 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.208762 5104 feature_gate.go:328] unrecognized feature gate: SignatureStores
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.208769 5104 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.208776 5104 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.208783 5104 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.208791 5104 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.208798 5104 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.208805 5104 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.208812 5104 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.208819 5104 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.208830 5104 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.208838 5104 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.208903 5104 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.208912 5104 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.208920 5104 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.208928 5104 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.208963 5104 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.208971 5104 feature_gate.go:328] unrecognized feature gate: DualReplica
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.208978 5104 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.208986 5104 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.208993 5104 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.209001 5104 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.209008 5104 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.209016 5104 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.209024 5104 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.209032 5104 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.209039 5104 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.209046 5104 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.209053 5104 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.209061 5104 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.209067 5104 feature_gate.go:328] unrecognized feature gate: Example
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.209078 5104 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.209086 5104 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.209093 5104 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.209100 5104 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.209110 5104 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.209117 5104 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.209125 5104 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.209132 5104 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.209139 5104 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.209146 5104 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.209153 5104 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.209160 5104 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.209168 5104 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.209178 5104 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.209185 5104 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.209192 5104 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.209201 5104 feature_gate.go:328] unrecognized feature gate: NewOLM
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.209400 5104 flags.go:64] FLAG: --address="0.0.0.0"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.209418 5104 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.209432 5104 flags.go:64] FLAG: --anonymous-auth="true"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.209443 5104 flags.go:64] FLAG: --application-metrics-count-limit="100"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.209455 5104 flags.go:64] FLAG: --authentication-token-webhook="false"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.209465 5104 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.209476 5104 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.209486 5104 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.209495 5104 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.209504 5104 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.209513 5104 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.209521 5104 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.209530 5104 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.209538 5104 flags.go:64] FLAG: --cgroup-root=""
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.209546 5104 flags.go:64] FLAG: --cgroups-per-qos="true"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.209556 5104 flags.go:64] FLAG: --client-ca-file=""
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.209564 5104 flags.go:64] FLAG: --cloud-config=""
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.209572 5104 flags.go:64] FLAG: --cloud-provider=""
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.209579 5104 flags.go:64] FLAG: --cluster-dns="[]"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.209591 5104 flags.go:64] FLAG: --cluster-domain=""
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.209599 5104 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.209607 5104 flags.go:64] FLAG: --config-dir=""
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.209615 5104 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.209626 5104 flags.go:64] FLAG: --container-log-max-files="5"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.209637 5104 flags.go:64] FLAG: --container-log-max-size="10Mi"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.209645 5104 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.209653 5104 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.209661 5104 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.209673 5104 flags.go:64] FLAG: --contention-profiling="false"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.209681 5104 flags.go:64] FLAG: --cpu-cfs-quota="true"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.209689 5104 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.209698 5104 flags.go:64] FLAG: --cpu-manager-policy="none"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.209706 5104 flags.go:64] FLAG: --cpu-manager-policy-options=""
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.209716 5104 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.209725 5104 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.209733 5104 flags.go:64] FLAG: --enable-debugging-handlers="true"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.209741 5104 flags.go:64] FLAG: --enable-load-reader="false"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.209750 5104 flags.go:64] FLAG: --enable-server="true"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.209757 5104 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.209768 5104 flags.go:64] FLAG: --event-burst="100"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.209777 5104 flags.go:64] FLAG: --event-qps="50"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.209785 5104 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.209794 5104 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.209802 5104 flags.go:64] FLAG: --eviction-hard=""
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.209812 5104 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.209820 5104 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.209828 5104 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.209836 5104 flags.go:64] FLAG: --eviction-soft=""
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.209844 5104 flags.go:64] FLAG: --eviction-soft-grace-period=""
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.209882 5104 flags.go:64] FLAG: --exit-on-lock-contention="false"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.209890 5104 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.209898 5104 flags.go:64] FLAG: --experimental-mounter-path=""
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.209906 5104 flags.go:64] FLAG: --fail-cgroupv1="false"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.209914 5104 flags.go:64] FLAG: --fail-swap-on="true"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.209922 5104 flags.go:64] FLAG: --feature-gates=""
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.209932 5104 flags.go:64] FLAG: --file-check-frequency="20s"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.209941 5104 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.209950 5104 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.209958 5104 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.209966 5104 flags.go:64] FLAG: --healthz-port="10248" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.209975 5104 flags.go:64] FLAG: --help="false" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.209984 5104 flags.go:64] FLAG: --hostname-override="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.209992 5104 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.210000 5104 flags.go:64] FLAG: --http-check-frequency="20s" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.210008 5104 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.210016 5104 flags.go:64] FLAG: --image-credential-provider-config="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.210023 5104 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.210033 5104 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.210041 5104 flags.go:64] FLAG: --image-service-endpoint="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.210049 5104 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.210056 5104 flags.go:64] FLAG: --kube-api-burst="100" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.210064 5104 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.210074 5104 flags.go:64] FLAG: --kube-api-qps="50" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.210081 5104 flags.go:64] FLAG: --kube-reserved="" Jan 
30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.210090 5104 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.210098 5104 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.210106 5104 flags.go:64] FLAG: --kubelet-cgroups="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.210114 5104 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.210122 5104 flags.go:64] FLAG: --lock-file="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.210130 5104 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.210138 5104 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.210146 5104 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.210159 5104 flags.go:64] FLAG: --log-json-split-stream="false" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.210167 5104 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.210175 5104 flags.go:64] FLAG: --log-text-split-stream="false" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.210183 5104 flags.go:64] FLAG: --logging-format="text" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.210191 5104 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.210200 5104 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.210209 5104 flags.go:64] FLAG: --manifest-url="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.210216 5104 flags.go:64] FLAG: --manifest-url-header="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.210228 5104 flags.go:64] FLAG: 
--max-housekeeping-interval="15s" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.210236 5104 flags.go:64] FLAG: --max-open-files="1000000" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.210247 5104 flags.go:64] FLAG: --max-pods="110" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.210255 5104 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.210264 5104 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.210272 5104 flags.go:64] FLAG: --memory-manager-policy="None" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.210280 5104 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.210288 5104 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.210296 5104 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.210304 5104 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhel" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.210323 5104 flags.go:64] FLAG: --node-status-max-images="50" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.210331 5104 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.210339 5104 flags.go:64] FLAG: --oom-score-adj="-999" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.210347 5104 flags.go:64] FLAG: --pod-cidr="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.210356 5104 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2b30e70040205c2536d01ae5c850be1ed2d775cf13249e50328e5085777977" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.210370 5104 flags.go:64] FLAG: --pod-manifest-path="" Jan 30 00:10:20 
crc kubenswrapper[5104]: I0130 00:10:20.210377 5104 flags.go:64] FLAG: --pod-max-pids="-1" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.210386 5104 flags.go:64] FLAG: --pods-per-core="0" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.210394 5104 flags.go:64] FLAG: --port="10250" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.210402 5104 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.210410 5104 flags.go:64] FLAG: --provider-id="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.210418 5104 flags.go:64] FLAG: --qos-reserved="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.210426 5104 flags.go:64] FLAG: --read-only-port="10255" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.210434 5104 flags.go:64] FLAG: --register-node="true" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.210442 5104 flags.go:64] FLAG: --register-schedulable="true" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.210450 5104 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.210464 5104 flags.go:64] FLAG: --registry-burst="10" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.210472 5104 flags.go:64] FLAG: --registry-qps="5" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.210480 5104 flags.go:64] FLAG: --reserved-cpus="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.210488 5104 flags.go:64] FLAG: --reserved-memory="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.210498 5104 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.210506 5104 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.210514 5104 flags.go:64] FLAG: --rotate-certificates="false" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.210522 5104 flags.go:64] FLAG: 
--rotate-server-certificates="false" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.210530 5104 flags.go:64] FLAG: --runonce="false" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.210538 5104 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.210546 5104 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.210555 5104 flags.go:64] FLAG: --seccomp-default="false" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.210563 5104 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.210571 5104 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.210579 5104 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.210587 5104 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.210595 5104 flags.go:64] FLAG: --storage-driver-password="root" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.210604 5104 flags.go:64] FLAG: --storage-driver-secure="false" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.210612 5104 flags.go:64] FLAG: --storage-driver-table="stats" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.210620 5104 flags.go:64] FLAG: --storage-driver-user="root" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.210628 5104 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.210636 5104 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.210645 5104 flags.go:64] FLAG: --system-cgroups="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.210653 5104 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 30 00:10:20 crc 
kubenswrapper[5104]: I0130 00:10:20.210666 5104 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.210674 5104 flags.go:64] FLAG: --tls-cert-file="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.210681 5104 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.210692 5104 flags.go:64] FLAG: --tls-min-version="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.210700 5104 flags.go:64] FLAG: --tls-private-key-file="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.210708 5104 flags.go:64] FLAG: --topology-manager-policy="none" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.210716 5104 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.210724 5104 flags.go:64] FLAG: --topology-manager-scope="container" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.210732 5104 flags.go:64] FLAG: --v="2" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.210742 5104 flags.go:64] FLAG: --version="false" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.210753 5104 flags.go:64] FLAG: --vmodule="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.210763 5104 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.210776 5104 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.211007 5104 feature_gate.go:328] unrecognized feature gate: GatewayAPI Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.211018 5104 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.211028 5104 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.211039 5104 feature_gate.go:328] unrecognized 
feature gate: PreconfiguredUDNAddresses Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.211047 5104 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.211054 5104 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.211062 5104 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.211069 5104 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.211077 5104 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.211085 5104 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.211092 5104 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.211102 5104 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. 
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.211111 5104 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.211120 5104 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.211128 5104 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.211136 5104 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.211144 5104 feature_gate.go:328] unrecognized feature gate: Example Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.211151 5104 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.211159 5104 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.211167 5104 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.211174 5104 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.211182 5104 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.211190 5104 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.211198 5104 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.211206 5104 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.211214 5104 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.211222 5104 
feature_gate.go:328] unrecognized feature gate: DualReplica Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.211231 5104 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.211239 5104 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.211247 5104 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.211255 5104 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.211264 5104 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.211271 5104 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.211278 5104 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.211288 5104 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.211301 5104 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.211309 5104 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.211317 5104 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.211324 5104 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.211331 5104 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.211339 5104 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.211346 5104 feature_gate.go:328] unrecognized feature gate: NewOLM Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.211353 5104 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.211360 5104 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.211368 5104 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.211376 5104 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.211384 5104 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.211391 5104 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.211399 5104 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.211406 5104 
feature_gate.go:328] unrecognized feature gate: PinnedImages Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.211414 5104 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.211421 5104 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.211428 5104 feature_gate.go:328] unrecognized feature gate: OVNObservability Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.211436 5104 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.211443 5104 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.211450 5104 feature_gate.go:328] unrecognized feature gate: SignatureStores Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.211457 5104 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.211465 5104 feature_gate.go:328] unrecognized feature gate: InsightsConfig Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.211472 5104 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.211480 5104 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.211487 5104 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.211495 5104 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.211502 5104 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.211509 5104 feature_gate.go:328] unrecognized feature gate: 
MachineAPIOperatorDisableMachineHealthCheckController Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.211516 5104 feature_gate.go:328] unrecognized feature gate: Example2 Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.211524 5104 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.211531 5104 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.211541 5104 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.211548 5104 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.211556 5104 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.211563 5104 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.211570 5104 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.211577 5104 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.211585 5104 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.211592 5104 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.211599 5104 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.211606 5104 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.211613 5104 feature_gate.go:328] unrecognized feature gate: 
VSphereConfigurableMaxAllowedBlockVolumesPerNode Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.211620 5104 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.211629 5104 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.211636 5104 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.211643 5104 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.211650 5104 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.211658 5104 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.211665 5104 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.211673 5104 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.213107 5104 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.228015 5104 server.go:530] "Kubelet version" kubeletVersion="v1.33.5" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.228084 5104 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" 
GOTRACEBACK="" Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.228198 5104 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.228221 5104 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.228231 5104 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.228241 5104 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.228251 5104 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.228259 5104 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.228267 5104 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.228274 5104 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.228281 5104 feature_gate.go:328] unrecognized feature gate: GatewayAPI Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.228289 5104 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.228297 5104 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.228305 5104 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.228312 5104 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.228323 5104 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. 
It will be removed in a future release.
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.228335 5104 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.228345 5104 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.228354 5104 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.228361 5104 feature_gate.go:328] unrecognized feature gate: PinnedImages
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.228369 5104 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.228378 5104 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.228385 5104 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.228393 5104 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.228400 5104 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.228407 5104 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.228415 5104 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.228422 5104 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.228430 5104 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.228438 5104 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.228445 5104 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.228453 5104 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.228460 5104 feature_gate.go:328] unrecognized feature gate: Example
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.228470 5104 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.228477 5104 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.228485 5104 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.228492 5104 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.228500 5104 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.228507 5104 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.228515 5104 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.228522 5104 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.228529 5104 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.228536 5104 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.228543 5104 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.228551 5104 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.228558 5104 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.228565 5104 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.228574 5104 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.228583 5104 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.228591 5104 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.228600 5104 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.228608 5104 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.228615 5104 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.228623 5104 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.228630 5104 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.228638 5104 feature_gate.go:328] unrecognized feature gate: OVNObservability
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.228645 5104 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.228652 5104 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.228659 5104 feature_gate.go:328] unrecognized feature gate: NewOLM
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.228667 5104 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.228675 5104 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.228685 5104 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.228698 5104 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.228706 5104 feature_gate.go:328] unrecognized feature gate: DualReplica
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.228713 5104 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.228720 5104 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.228729 5104 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.228737 5104 feature_gate.go:328] unrecognized feature gate: SignatureStores
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.228744 5104 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.228755 5104 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.228762 5104 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.228769 5104 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.228777 5104 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.228784 5104 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.228792 5104 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.228800 5104 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.228807 5104 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.228814 5104 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.228821 5104 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.228828 5104 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.228835 5104 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.228843 5104 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.228882 5104 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.228892 5104 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.228902 5104 feature_gate.go:328] unrecognized feature gate: Example2
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.228911 5104 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.228921 5104 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.228930 5104 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.228969 5104 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]}
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.229222 5104 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.229237 5104 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.229246 5104 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.229254 5104 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.229262 5104 feature_gate.go:328] unrecognized feature gate: Example
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.229270 5104 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.229278 5104 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.229285 5104 feature_gate.go:328] unrecognized feature gate: PinnedImages
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.229292 5104 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.229299 5104 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.229308 5104 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.229316 5104 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.229324 5104 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.229332 5104 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.229339 5104 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.229348 5104 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.229355 5104 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.229362 5104 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.229369 5104 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.229377 5104 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.229384 5104 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.229392 5104 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.229399 5104 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.229406 5104 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.229413 5104 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.229423 5104 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.229434 5104 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.229443 5104 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.229451 5104 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.229459 5104 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.229467 5104 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.229475 5104 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.229483 5104 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.229491 5104 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.229498 5104 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.229505 5104 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.229513 5104 feature_gate.go:328] unrecognized feature gate: Example2
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.229520 5104 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.229527 5104 feature_gate.go:328] unrecognized feature gate: SignatureStores
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.229534 5104 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.229541 5104 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.229549 5104 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.229559 5104 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.229568 5104 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.229576 5104 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.229584 5104 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.229592 5104 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.229601 5104 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.229608 5104 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.229616 5104 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.229623 5104 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.229631 5104 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.229642 5104 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.229651 5104 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.229659 5104 feature_gate.go:328] unrecognized feature gate: OVNObservability
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.229666 5104 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.229673 5104 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.229681 5104 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.229688 5104 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.229695 5104 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.229703 5104 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.229710 5104 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.229717 5104 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.229725 5104 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.229733 5104 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.229741 5104 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.229749 5104 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.229756 5104 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.229765 5104 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.229772 5104 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.229780 5104 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.229787 5104 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.229794 5104 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.229801 5104 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.229809 5104 feature_gate.go:328] unrecognized feature gate: DualReplica
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.229816 5104 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.229825 5104 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.229832 5104 feature_gate.go:328] unrecognized feature gate: NewOLM
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.229840 5104 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.229876 5104 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.229884 5104 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.229891 5104 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.229899 5104 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.229906 5104 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.229914 5104 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Jan 30 00:10:20 crc kubenswrapper[5104]: W0130 00:10:20.229921 5104 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.229934 5104 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]}
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.230985 5104 server.go:962] "Client rotation is on, will bootstrap in background"
Jan 30 00:10:20 crc kubenswrapper[5104]: E0130 00:10:20.236580 5104 bootstrap.go:266] "Unhandled Error" err="part of the existing bootstrap client certificate in /var/lib/kubelet/kubeconfig is expired: 2025-12-03 08:27:53 +0000 UTC" logger="UnhandledError"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.240574 5104 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.240719 5104 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.242066 5104 server.go:1019] "Starting client certificate rotation"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.242200 5104 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kube-apiserver-client-kubelet"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.242272 5104 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.270727 5104 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.275510 5104 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 30 00:10:20 crc kubenswrapper[5104]: E0130 00:10:20.275714 5104 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.184:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.299578 5104 log.go:25] "Validated CRI v1 runtime API"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.348488 5104 log.go:25] "Validated CRI v1 image API"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.350296 5104 server.go:1452] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.353477 5104 fs.go:135] Filesystem UUIDs: map[19e76f87-96b8-4794-9744-0b33dca22d5b:/dev/vda3 2026-01-30-00-04-02-00:/dev/sr0 5eb7c122-420e-4494-80ec-41664070d7b6:/dev/vda4 7B77-95E7:/dev/vda2]
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.353509 5104 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:45 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:31 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:46 fsType:tmpfs blockSize:0} composefs_0-33:{mountpoint:/ major:0 minor:33 fsType:overlay blockSize:0}]
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.373540 5104 manager.go:217] Machine: {Timestamp:2026-01-30 00:10:20.371220391 +0000 UTC m=+1.103559650 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33649926144 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:80bc4fba336e4ca1bc9d28a8be52a356 SystemUUID:6d24271c-4d6f-4082-96cf-a2854971c0dc BootID:ddbe5ca8-cca6-45e8-a308-ea9fc8d3013e Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16824963072 Type:vfs Inodes:4107657 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6729986048 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:composefs_0-33 DeviceMajor:0 DeviceMinor:33 Capacity:6545408 Type:vfs Inodes:18446744073709551615 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:45 Capacity:3364990976 Type:vfs Inodes:821531 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:46 Capacity:1073741824 Type:vfs Inodes:4107657 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:31 Capacity:16824963072 Type:vfs Inodes:1048576 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:f1:d5:c9 Speed:0 Mtu:1500} {Name:br-int MacAddress:b2:a9:9f:57:07:84 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:f1:d5:c9 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:4c:fc:5c Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:ea:a6:de Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:7c:ca:f3 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:76:04:71 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:12:e2:89:31:90:35 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:2a:b6:f6:1f:41:6f Speed:0 Mtu:1500} {Name:tap0 MacAddress:5a:94:ef:e4:0c:ee Speed:10 Mtu:1500}] Topology:[{Id:0 Memory:33649926144 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.373808 5104 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.373984 5104 manager.go:233] Version: {KernelVersion:5.14.0-570.57.1.el9_6.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 9.6.20251021-0 (Plow) DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.376879 5104 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.376929 5104 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.377190 5104 topology_manager.go:138] "Creating topology manager with none policy"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.377206 5104 container_manager_linux.go:306] "Creating device plugin manager"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.377231 5104 manager.go:141] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.378454 5104 server.go:72] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.378773 5104 state_mem.go:36] "Initialized new in-memory state store"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.379270 5104 server.go:1267] "Using root directory" path="/var/lib/kubelet"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.384908 5104 kubelet.go:491] "Attempting to sync node with API server"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.384932 5104 kubelet.go:386] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.384959 5104 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.384979 5104 kubelet.go:397] "Adding apiserver pod source"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.385030 5104 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 30 00:10:20 crc kubenswrapper[5104]: E0130 00:10:20.389835 5104 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.184:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jan 30 00:10:20 crc kubenswrapper[5104]: E0130 00:10:20.389947 5104 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.184:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.390769 5104 state_checkpoint.go:81] "State checkpoint: restored pod resource state from checkpoint"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.390805 5104 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.401611 5104 state_checkpoint.go:81] "State checkpoint: restored pod resource state from checkpoint"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.401668 5104 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.405526 5104 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="cri-o" version="1.33.5-3.rhaos4.20.gitd0ea985.el9" apiVersion="v1"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.405884 5104 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-server-current.pem"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.406752 5104 kubelet.go:953] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.407966 5104 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.408014 5104 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.408031 5104 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.408053 5104 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.408068 5104 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.408082 5104 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.408096 5104 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.408110 5104 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.408130 5104 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.408157 5104 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.408176 5104 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.408724 5104 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.410034 5104 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.410070 5104 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/image"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.412546 5104 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.184:6443: connect: connection refused
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.437444 5104 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.437558 5104 server.go:1295] "Started kubelet"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.437896 5104 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.438063 5104 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.438253 5104 server_v1.go:47] "podresources" method="list" useActivePods=true
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.439040 5104 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 30 00:10:20 crc systemd[1]: Started Kubernetes Kubelet.
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.441637 5104 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kubelet-serving" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.441933 5104 server.go:317] "Adding debug handlers to kubelet server" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.441997 5104 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 00:10:20 crc kubenswrapper[5104]: E0130 00:10:20.441453 5104 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.184:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188f59b6d8e1ca78 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:20.437490296 +0000 UTC m=+1.169829555,LastTimestamp:2026-01-30 00:10:20.437490296 +0000 UTC m=+1.169829555,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.443092 5104 volume_manager.go:295] "The desired_state_of_world populator starts" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.443139 5104 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 30 00:10:20 crc kubenswrapper[5104]: E0130 00:10:20.443500 5104 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.184:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 
00:10:20.443639 5104 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 30 00:10:20 crc kubenswrapper[5104]: E0130 00:10:20.445104 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.447338 5104 factory.go:55] Registering systemd factory Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.447560 5104 factory.go:223] Registration of the systemd container factory successfully Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.448148 5104 factory.go:153] Registering CRI-O factory Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.448188 5104 factory.go:223] Registration of the crio container factory successfully Jan 30 00:10:20 crc kubenswrapper[5104]: E0130 00:10:20.448187 5104 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.184:6443: connect: connection refused" interval="200ms" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.448297 5104 factory.go:221] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.448386 5104 factory.go:103] Registering Raw factory Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.448417 5104 manager.go:1196] Started watching for new ooms in manager Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.449415 5104 manager.go:319] Starting recovery of all containers Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.500000 5104 manager.go:324] Recovery completed Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.516240 5104 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.516306 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.516320 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.516332 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.516344 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.516356 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.516367 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.516380 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.516394 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.516405 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.516415 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.516426 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.516464 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" 
volumeName="kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.516477 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" volumeName="kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.516495 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.516510 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.516522 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.516534 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.516543 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" 
volumeName="kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.516552 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.516561 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.516570 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.516580 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.516591 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.516600 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit" 
seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.516618 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" volumeName="kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.516650 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34177974-8d82-49d2-a763-391d0df3bbd8" volumeName="kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.516659 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.516675 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.516686 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.516696 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7e2c886-118e-43bb-bef1-c78134de392b" volumeName="kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr" seLinuxMountContext="" Jan 30 00:10:20 crc 
kubenswrapper[5104]: I0130 00:10:20.516706 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.516716 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.516727 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.516736 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.516745 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.516754 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.516767 5104 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" volumeName="kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.516780 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.516793 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.516803 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.516812 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.516822 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.516831 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" volumeName="kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.516840 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.518023 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.518082 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.518114 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.518142 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ee8fbd3-1f81-4666-96da-5afc70819f1a" volumeName="kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.518165 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" 
volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.518192 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.518220 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.518246 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.518271 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.518297 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" volumeName="kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.518325 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" 
volumeName="kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.518375 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.518405 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.518430 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.518457 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.518483 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.518511 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" 
volumeName="kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.518535 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.518561 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.518582 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.518601 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.518622 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.518641 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls" 
seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.518661 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.518679 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.518699 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.518718 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.518745 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.518770 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.518796 5104 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.518819 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.518890 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.518950 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.519008 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.519034 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.519062 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.519088 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.519114 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a14caf222afb62aaabdc47808b6f944" volumeName="kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.519138 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.519160 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.519184 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.519212 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" 
volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.519236 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.519268 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.519295 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" volumeName="kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.519320 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.519345 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.519369 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" 
volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.519393 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af41de71-79cf-4590-bbe9-9e8b848862cb" volumeName="kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.519417 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.519443 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.519467 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.519491 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.519516 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" 
volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.519541 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.519569 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.519562 5104 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.519619 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.519677 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.519721 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.519750 5104 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.519775 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.519800 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.519824 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.519885 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.519914 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.519942 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" 
volumeName="kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.519969 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.520023 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.520051 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.520114 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.520133 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.520151 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir" seLinuxMountContext="" 
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.520170 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a208c9c2-333b-4b4a-be0d-bc32ec38a821" volumeName="kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.520190 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.520208 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.520228 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.520247 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.520266 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b638b8f4bb0070e40528db779baf6a2" volumeName="kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.520284 5104 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.520303 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.520322 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.520340 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.520385 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.520404 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.520422 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.520474 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.520496 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.520513 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ee8fbd3-1f81-4666-96da-5afc70819f1a" volumeName="kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.520532 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.520550 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.520567 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" 
volumeName="kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.520587 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.520607 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.522604 5104 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1264ac67579ad07e7e9003054d44fe40dd55285a4b2f7dc74e48be1aee0868a/globalmount" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.522653 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.522677 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7e2c886-118e-43bb-bef1-c78134de392b" volumeName="kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.522697 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="f863fff9-286a-45fa-b8f0-8a86994b8440" volumeName="kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.522715 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.522736 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.522755 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.522805 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.522826 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.522844 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" 
volumeName="kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.522947 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a14caf222afb62aaabdc47808b6f944" volumeName="kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.522975 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ebfebf6-3ecd-458e-943f-bb25b52e2718" volumeName="kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.523003 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.523029 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.523054 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.523073 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk" seLinuxMountContext="" 
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.523091 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.523112 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.523120 5104 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.523129 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.523300 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.523323 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0effdbcf-dd7d-404d-9d48-77536d665a5d" volumeName="kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.523341 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" 
volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.523359 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.523379 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.523398 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.523416 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.523437 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20c5c5b4bed930554494851fe3cb2b2a" volumeName="kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.523455 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp" 
seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.523474 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.523494 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.523527 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.523545 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.523563 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.523581 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert" seLinuxMountContext="" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.523601 5104 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" volumeName="kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs" seLinuxMountContext=""
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.523621 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42a11a02-47e1-488f-b270-2679d3298b0e" volumeName="kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj" seLinuxMountContext=""
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.523640 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir" seLinuxMountContext=""
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.523660 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config" seLinuxMountContext=""
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.523678 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles" seLinuxMountContext=""
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.523698 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert" seLinuxMountContext=""
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.523717 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config" seLinuxMountContext=""
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.523735 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e093be35-bb62-4843-b2e8-094545761610" volumeName="kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv" seLinuxMountContext=""
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.523754 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config" seLinuxMountContext=""
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.523773 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert" seLinuxMountContext=""
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.523791 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs" seLinuxMountContext=""
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.523811 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides" seLinuxMountContext=""
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.523831 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client" seLinuxMountContext=""
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.523896 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34177974-8d82-49d2-a763-391d0df3bbd8" volumeName="kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls" seLinuxMountContext=""
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.523919 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig" seLinuxMountContext=""
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.523937 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" seLinuxMountContext=""
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.523958 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a208c9c2-333b-4b4a-be0d-bc32ec38a821" volumeName="kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl" seLinuxMountContext=""
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.523977 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config" seLinuxMountContext=""
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.523998 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6" seLinuxMountContext=""
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.524025 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42a11a02-47e1-488f-b270-2679d3298b0e" volumeName="kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.523931 5104 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.524051 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.524080 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config" seLinuxMountContext=""
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.524107 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp" seLinuxMountContext=""
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.524128 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv" seLinuxMountContext=""
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.524149 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key" seLinuxMountContext=""
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.524178 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config" seLinuxMountContext=""
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.524197 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs" seLinuxMountContext=""
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.524216 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp" seLinuxMountContext=""
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.524249 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert" seLinuxMountContext=""
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.524268 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir" seLinuxMountContext=""
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.524088 5104 status_manager.go:230] "Starting to sync pod status with apiserver"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.524418 5104 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.524443 5104 kubelet.go:2451] "Starting kubelet main sync loop"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.524289 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir" seLinuxMountContext=""
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.524527 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7" seLinuxMountContext=""
Jan 30 00:10:20 crc kubenswrapper[5104]: E0130 00:10:20.524510 5104 kubelet.go:2475] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.524570 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert" seLinuxMountContext=""
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.524596 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert" seLinuxMountContext=""
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.524626 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf" seLinuxMountContext=""
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.524648 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6" seLinuxMountContext=""
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.524670 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" volumeName="kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert" seLinuxMountContext=""
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.524693 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs" seLinuxMountContext=""
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.524712 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert" seLinuxMountContext=""
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.524732 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates" seLinuxMountContext=""
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.524754 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics" seLinuxMountContext=""
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.524782 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j" seLinuxMountContext=""
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.524808 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities" seLinuxMountContext=""
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.524834 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" seLinuxMountContext=""
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.524904 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config" seLinuxMountContext=""
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.524993 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client" seLinuxMountContext=""
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.525029 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t" seLinuxMountContext=""
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.525058 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls" seLinuxMountContext=""
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.525110 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities" seLinuxMountContext=""
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.525141 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls" seLinuxMountContext=""
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.525171 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content" seLinuxMountContext=""
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.525183 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.525196 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" volumeName="kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert" seLinuxMountContext=""
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.525224 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.525227 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume" seLinuxMountContext=""
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.525317 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted" seLinuxMountContext=""
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.525343 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert" seLinuxMountContext=""
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.525369 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="17b87002-b798-480a-8e17-83053d698239" volumeName="kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b" seLinuxMountContext=""
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.525395 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca" seLinuxMountContext=""
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.525420 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" seLinuxMountContext=""
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.525240 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.525449 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm" seLinuxMountContext=""
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.525480 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" seLinuxMountContext=""
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.525504 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert" seLinuxMountContext=""
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.525529 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs" seLinuxMountContext=""
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.525555 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token" seLinuxMountContext=""
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.525584 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config" seLinuxMountContext=""
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.525609 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images" seLinuxMountContext=""
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.525634 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ebfebf6-3ecd-458e-943f-bb25b52e2718" volumeName="kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca" seLinuxMountContext=""
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.525658 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config" seLinuxMountContext=""
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.525681 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap" seLinuxMountContext=""
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.525706 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert" seLinuxMountContext=""
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.525730 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir" seLinuxMountContext=""
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.525753 5104 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access" seLinuxMountContext=""
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.525776 5104 reconstruct.go:97] "Volume reconstruction finished"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.525792 5104 reconciler.go:26] "Reconciler: start to sync state"
Jan 30 00:10:20 crc kubenswrapper[5104]: E0130 00:10:20.528231 5104 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.184:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.529252 5104 cpu_manager.go:222] "Starting CPU manager" policy="none"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.529261 5104 cpu_manager.go:223] "Reconciling" reconcilePeriod="10s"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.529282 5104 state_mem.go:36] "Initialized new in-memory state store"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.539524 5104 policy_none.go:49] "None policy: Start"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.539587 5104 memory_manager.go:186] "Starting memorymanager" policy="None"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.539611 5104 state_mem.go:35] "Initializing new in-memory state store"
Jan 30 00:10:20 crc kubenswrapper[5104]: E0130 00:10:20.545665 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.604603 5104 manager.go:341] "Starting Device Plugin manager"
Jan 30 00:10:20 crc kubenswrapper[5104]: E0130 00:10:20.604840 5104 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.604885 5104 server.go:85] "Starting device plugin registration server"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.605346 5104 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.605365 5104 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.605560 5104 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.605632 5104 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.605640 5104 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 30 00:10:20 crc kubenswrapper[5104]: E0130 00:10:20.609159 5104 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="non-existent label \"crio-containers\""
Jan 30 00:10:20 crc kubenswrapper[5104]: E0130 00:10:20.609232 5104 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.625383 5104 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc"]
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.625694 5104 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.626589 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.626633 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.626647 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.627459 5104 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.628005 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.628076 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.628116 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.629009 5104 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.629358 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.629490 5104 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.629523 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.629557 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.629571 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.629811 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.629947 5104 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.630670 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.630726 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.630767 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.630977 5104 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.631122 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.631185 5104 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.632903 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.632923 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.632984 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.633011 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.633046 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.633072 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.633692 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.633719 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.633731 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.641671 5104 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.641742 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.641829 5104 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.643838 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.643997 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.645406 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.645669 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.645701 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.645718 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.647030 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.647083 5104 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.647777 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.647811 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.647827 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:10:20 crc kubenswrapper[5104]: E0130 00:10:20.648913 5104 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.184:6443: connect: connection refused" interval="400ms"
Jan 30 00:10:20 crc kubenswrapper[5104]: E0130 00:10:20.680038 5104 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 30 00:10:20 crc kubenswrapper[5104]: E0130 00:10:20.690187 5104 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.707356 5104 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.708576 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.708699 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.708719 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.708753 5104 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Jan 30 00:10:20 crc kubenswrapper[5104]: E0130 00:10:20.709393 5104 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.184:6443: connect: connection refused" node="crc"
Jan 30 00:10:20 crc kubenswrapper[5104]: E0130 00:10:20.714423 5104 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.729260 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.729408 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.729441 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.729512 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.729796 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.730036 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.730089 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.730129 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.730162 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName:
\"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.730195 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.730335 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.730366 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.730392 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.730530 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod 
\"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.730627 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.730680 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.730730 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.730768 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.730807 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: 
\"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.730836 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.730884 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.730908 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.730931 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.730968 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 00:10:20 
crc kubenswrapper[5104]: I0130 00:10:20.731000 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-run-kubernetes\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.731150 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.731257 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.731281 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-run-kubernetes\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.731296 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.732421 
5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 30 00:10:20 crc kubenswrapper[5104]: E0130 00:10:20.742877 5104 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:10:20 crc kubenswrapper[5104]: E0130 00:10:20.748790 5104 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.832373 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.832459 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.832489 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.832503 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.832576 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.832626 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.832648 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.832659 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.832678 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 30 00:10:20 crc 
kubenswrapper[5104]: I0130 00:10:20.832664 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.832698 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.832702 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.832713 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.832903 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.832730 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod 
\"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.832966 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.832939 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.832998 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.833011 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.833048 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 
00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.833060 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.833074 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.833107 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.833109 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.833137 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.833151 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" 
(UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.833163 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.833160 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.833195 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.833268 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.833291 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 30 00:10:20 crc 
kubenswrapper[5104]: I0130 00:10:20.833361 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.909665 5104 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.911039 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.911132 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.911147 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.911190 5104 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 30 00:10:20 crc kubenswrapper[5104]: E0130 00:10:20.912032 5104 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.184:6443: connect: connection refused" node="crc" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.982140 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 30 00:10:20 crc kubenswrapper[5104]: I0130 00:10:20.991515 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:10:21 crc kubenswrapper[5104]: I0130 00:10:21.016670 5104 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 00:10:21 crc kubenswrapper[5104]: W0130 00:10:21.042966 5104 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a14caf222afb62aaabdc47808b6f944.slice/crio-4aa2811469b2b2492645838bb4226e0639e10d0f8d8251d317184ddb88aeb170 WatchSource:0}: Error finding container 4aa2811469b2b2492645838bb4226e0639e10d0f8d8251d317184ddb88aeb170: Status 404 returned error can't find the container with id 4aa2811469b2b2492645838bb4226e0639e10d0f8d8251d317184ddb88aeb170 Jan 30 00:10:21 crc kubenswrapper[5104]: I0130 00:10:21.043289 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 00:10:21 crc kubenswrapper[5104]: I0130 00:10:21.049348 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 00:10:21 crc kubenswrapper[5104]: E0130 00:10:21.050190 5104 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.184:6443: connect: connection refused" interval="800ms" Jan 30 00:10:21 crc kubenswrapper[5104]: W0130 00:10:21.051455 5104 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20c5c5b4bed930554494851fe3cb2b2a.slice/crio-bcf1970a71a68ce485c1fa48dd204eca61a3fd772d372634a0fc85ae95c65ab7 WatchSource:0}: Error finding container bcf1970a71a68ce485c1fa48dd204eca61a3fd772d372634a0fc85ae95c65ab7: Status 404 returned error can't find the container with id bcf1970a71a68ce485c1fa48dd204eca61a3fd772d372634a0fc85ae95c65ab7 Jan 30 00:10:21 crc kubenswrapper[5104]: I0130 00:10:21.061674 5104 provider.go:93] Refreshing cache 
for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 00:10:21 crc kubenswrapper[5104]: W0130 00:10:21.072200 5104 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f0bc7fcb0822a2c13eb2d22cd8c0641.slice/crio-e17547a9a44b7de835239fe57135cb8976fea9d48779f8e00cc9c2c41fedfaf3 WatchSource:0}: Error finding container e17547a9a44b7de835239fe57135cb8976fea9d48779f8e00cc9c2c41fedfaf3: Status 404 returned error can't find the container with id e17547a9a44b7de835239fe57135cb8976fea9d48779f8e00cc9c2c41fedfaf3 Jan 30 00:10:21 crc kubenswrapper[5104]: W0130 00:10:21.073401 5104 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b638b8f4bb0070e40528db779baf6a2.slice/crio-3c2d8f379242d9a58531ee261130cefdbe47daf7baa51b0fdd5ab81245a69bc9 WatchSource:0}: Error finding container 3c2d8f379242d9a58531ee261130cefdbe47daf7baa51b0fdd5ab81245a69bc9: Status 404 returned error can't find the container with id 3c2d8f379242d9a58531ee261130cefdbe47daf7baa51b0fdd5ab81245a69bc9 Jan 30 00:10:21 crc kubenswrapper[5104]: W0130 00:10:21.083502 5104 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e08c320b1e9e2405e6e0107bdf7eeb4.slice/crio-9744e580fba6dd84cb989c5a1bb6328c0cf319edede5370e8c7652b83a8c2083 WatchSource:0}: Error finding container 9744e580fba6dd84cb989c5a1bb6328c0cf319edede5370e8c7652b83a8c2083: Status 404 returned error can't find the container with id 9744e580fba6dd84cb989c5a1bb6328c0cf319edede5370e8c7652b83a8c2083 Jan 30 00:10:21 crc kubenswrapper[5104]: I0130 00:10:21.312980 5104 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:21 crc kubenswrapper[5104]: I0130 00:10:21.314409 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 30 00:10:21 crc kubenswrapper[5104]: I0130 00:10:21.314441 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:21 crc kubenswrapper[5104]: I0130 00:10:21.314475 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:21 crc kubenswrapper[5104]: I0130 00:10:21.314503 5104 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 30 00:10:21 crc kubenswrapper[5104]: E0130 00:10:21.315006 5104 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.184:6443: connect: connection refused" node="crc" Jan 30 00:10:21 crc kubenswrapper[5104]: I0130 00:10:21.413741 5104 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.184:6443: connect: connection refused Jan 30 00:10:21 crc kubenswrapper[5104]: E0130 00:10:21.413829 5104 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.184:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 30 00:10:21 crc kubenswrapper[5104]: E0130 00:10:21.422644 5104 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.184:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 30 00:10:21 crc kubenswrapper[5104]: E0130 00:10:21.521682 5104 event.go:368] "Unable to write event 
(may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.184:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188f59b6d8e1ca78 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:20.437490296 +0000 UTC m=+1.169829555,LastTimestamp:2026-01-30 00:10:20.437490296 +0000 UTC m=+1.169829555,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:21 crc kubenswrapper[5104]: I0130 00:10:21.533922 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"4aa2811469b2b2492645838bb4226e0639e10d0f8d8251d317184ddb88aeb170"} Jan 30 00:10:21 crc kubenswrapper[5104]: I0130 00:10:21.534918 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"bcf1970a71a68ce485c1fa48dd204eca61a3fd772d372634a0fc85ae95c65ab7"} Jan 30 00:10:21 crc kubenswrapper[5104]: I0130 00:10:21.535960 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerStarted","Data":"9744e580fba6dd84cb989c5a1bb6328c0cf319edede5370e8c7652b83a8c2083"} Jan 30 00:10:21 crc kubenswrapper[5104]: I0130 00:10:21.538048 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"3c2d8f379242d9a58531ee261130cefdbe47daf7baa51b0fdd5ab81245a69bc9"} Jan 30 00:10:21 crc kubenswrapper[5104]: I0130 00:10:21.538817 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"e17547a9a44b7de835239fe57135cb8976fea9d48779f8e00cc9c2c41fedfaf3"} Jan 30 00:10:21 crc kubenswrapper[5104]: E0130 00:10:21.552424 5104 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.184:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 30 00:10:21 crc kubenswrapper[5104]: E0130 00:10:21.562341 5104 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.184:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 30 00:10:21 crc kubenswrapper[5104]: E0130 00:10:21.850888 5104 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.184:6443: connect: connection refused" interval="1.6s" Jan 30 00:10:22 crc kubenswrapper[5104]: I0130 00:10:22.115998 5104 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:22 crc kubenswrapper[5104]: I0130 00:10:22.117029 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:22 crc 
kubenswrapper[5104]: I0130 00:10:22.117075 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:22 crc kubenswrapper[5104]: I0130 00:10:22.117087 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:22 crc kubenswrapper[5104]: I0130 00:10:22.117113 5104 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 30 00:10:22 crc kubenswrapper[5104]: E0130 00:10:22.117572 5104 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.184:6443: connect: connection refused" node="crc" Jan 30 00:10:22 crc kubenswrapper[5104]: I0130 00:10:22.364130 5104 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Jan 30 00:10:22 crc kubenswrapper[5104]: E0130 00:10:22.365506 5104 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.184:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 30 00:10:22 crc kubenswrapper[5104]: I0130 00:10:22.413814 5104 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.184:6443: connect: connection refused Jan 30 00:10:22 crc kubenswrapper[5104]: I0130 00:10:22.545790 5104 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="ae33bfa613b46c1b72972cced863e6209b8e9eafade2bfbe6cd19d2fb5ee3591" exitCode=0 Jan 30 00:10:22 crc kubenswrapper[5104]: I0130 00:10:22.545929 5104 kubelet.go:2569] "SyncLoop (PLEG): 
event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"ae33bfa613b46c1b72972cced863e6209b8e9eafade2bfbe6cd19d2fb5ee3591"} Jan 30 00:10:22 crc kubenswrapper[5104]: I0130 00:10:22.546039 5104 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:22 crc kubenswrapper[5104]: I0130 00:10:22.548010 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:22 crc kubenswrapper[5104]: I0130 00:10:22.548042 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:22 crc kubenswrapper[5104]: I0130 00:10:22.548055 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:22 crc kubenswrapper[5104]: E0130 00:10:22.548266 5104 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:10:22 crc kubenswrapper[5104]: I0130 00:10:22.548445 5104 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="c3dfaca6ebdcc7e86e59721d7b1c4e7825a4a23ea5ee58dc5b1445a63994b711" exitCode=0 Jan 30 00:10:22 crc kubenswrapper[5104]: I0130 00:10:22.548519 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"c3dfaca6ebdcc7e86e59721d7b1c4e7825a4a23ea5ee58dc5b1445a63994b711"} Jan 30 00:10:22 crc kubenswrapper[5104]: I0130 00:10:22.548704 5104 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:22 crc kubenswrapper[5104]: I0130 00:10:22.549721 5104 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:22 crc 
kubenswrapper[5104]: I0130 00:10:22.550207 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:22 crc kubenswrapper[5104]: I0130 00:10:22.550372 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:22 crc kubenswrapper[5104]: I0130 00:10:22.550396 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:22 crc kubenswrapper[5104]: E0130 00:10:22.550594 5104 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:10:22 crc kubenswrapper[5104]: I0130 00:10:22.550897 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:22 crc kubenswrapper[5104]: I0130 00:10:22.550916 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:22 crc kubenswrapper[5104]: I0130 00:10:22.550925 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:22 crc kubenswrapper[5104]: I0130 00:10:22.550974 5104 generic.go:358] "Generic (PLEG): container finished" podID="4e08c320b1e9e2405e6e0107bdf7eeb4" containerID="991a729c5e18b1bfa18b949f180147804f656e534eed823b6cfd848589448a11" exitCode=0 Jan 30 00:10:22 crc kubenswrapper[5104]: I0130 00:10:22.551036 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerDied","Data":"991a729c5e18b1bfa18b949f180147804f656e534eed823b6cfd848589448a11"} Jan 30 00:10:22 crc kubenswrapper[5104]: I0130 00:10:22.551217 5104 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:22 crc 
kubenswrapper[5104]: I0130 00:10:22.552492 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:22 crc kubenswrapper[5104]: I0130 00:10:22.552530 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:22 crc kubenswrapper[5104]: I0130 00:10:22.552547 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:22 crc kubenswrapper[5104]: E0130 00:10:22.552825 5104 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:10:22 crc kubenswrapper[5104]: I0130 00:10:22.556026 5104 generic.go:358] "Generic (PLEG): container finished" podID="0b638b8f4bb0070e40528db779baf6a2" containerID="986b21b22cd3bdf35c46b74e23ebf17435e4f31f7fc4cb8270e7bef6c7d3aeb3" exitCode=0 Jan 30 00:10:22 crc kubenswrapper[5104]: I0130 00:10:22.556079 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerDied","Data":"986b21b22cd3bdf35c46b74e23ebf17435e4f31f7fc4cb8270e7bef6c7d3aeb3"} Jan 30 00:10:22 crc kubenswrapper[5104]: I0130 00:10:22.556152 5104 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:22 crc kubenswrapper[5104]: I0130 00:10:22.556743 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:22 crc kubenswrapper[5104]: I0130 00:10:22.556806 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:22 crc kubenswrapper[5104]: I0130 00:10:22.556834 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:22 crc kubenswrapper[5104]: 
E0130 00:10:22.557225 5104 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:10:22 crc kubenswrapper[5104]: I0130 00:10:22.559660 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"bea04eb937eda3bc23c54503bd818434d7a6f7fab1b23383843cc7bf8379462b"} Jan 30 00:10:22 crc kubenswrapper[5104]: I0130 00:10:22.559726 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"383804c6c2c049cb0469a54bdc63fa42ec853ada3540352b5520d7b25d1da994"} Jan 30 00:10:22 crc kubenswrapper[5104]: I0130 00:10:22.559762 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"b8f7b53bbb2fea415aa6f8cab552a634e497844f09ceab42a0dccba0cc0d62fd"} Jan 30 00:10:23 crc kubenswrapper[5104]: E0130 00:10:23.284125 5104 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.184:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 30 00:10:23 crc kubenswrapper[5104]: I0130 00:10:23.413874 5104 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.184:6443: connect: connection refused Jan 30 00:10:23 crc kubenswrapper[5104]: E0130 00:10:23.425223 5104 reflector.go:200] "Failed to watch" err="failed to list 
*v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.184:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 30 00:10:23 crc kubenswrapper[5104]: E0130 00:10:23.452252 5104 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.184:6443: connect: connection refused" interval="3.2s" Jan 30 00:10:23 crc kubenswrapper[5104]: E0130 00:10:23.499829 5104 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.184:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 30 00:10:23 crc kubenswrapper[5104]: I0130 00:10:23.566124 5104 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:23 crc kubenswrapper[5104]: I0130 00:10:23.566307 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerStarted","Data":"446c0914d8c5bcbe4b931fac391de5327afb0740f5a647ff10bfa8ae3718070a"} Jan 30 00:10:23 crc kubenswrapper[5104]: I0130 00:10:23.567886 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:23 crc kubenswrapper[5104]: I0130 00:10:23.567955 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:23 crc kubenswrapper[5104]: I0130 00:10:23.567981 5104 kubelet_node_status.go:736] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 30 00:10:23 crc kubenswrapper[5104]: E0130 00:10:23.568329 5104 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:10:23 crc kubenswrapper[5104]: I0130 00:10:23.570544 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"b0901a202d5e8c1e87d98c3af50e89ff2f04e3048aa45f79db8a23a1020c0178"} Jan 30 00:10:23 crc kubenswrapper[5104]: I0130 00:10:23.570622 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"39527af563278eaf7f4de232e9b050b0a2a37b4f221fbcff6253ffbfc6a6db05"} Jan 30 00:10:23 crc kubenswrapper[5104]: I0130 00:10:23.575114 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"00629bfc42e0323311bb23b075167b46d96260c873bb2179d4b4e10a20c048ef"} Jan 30 00:10:23 crc kubenswrapper[5104]: I0130 00:10:23.575242 5104 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:23 crc kubenswrapper[5104]: I0130 00:10:23.576229 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:23 crc kubenswrapper[5104]: I0130 00:10:23.576282 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:23 crc kubenswrapper[5104]: I0130 00:10:23.576302 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:23 crc kubenswrapper[5104]: E0130 00:10:23.576610 5104 kubelet.go:3336] 
"No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:10:23 crc kubenswrapper[5104]: I0130 00:10:23.579590 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"341f0f24fd96be5b40281bed5ebcb965c115891201881ea7fca2d25b621efcf4"} Jan 30 00:10:23 crc kubenswrapper[5104]: I0130 00:10:23.579639 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"cefeb3f03767c76f93f967f91a3a91beb76d605eca9cbc8c1511e20275afe6f1"} Jan 30 00:10:23 crc kubenswrapper[5104]: I0130 00:10:23.583442 5104 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="82c9d3ad0af1dbe7691b30eb224da8a661baeac16b755dc1fccf77c90dda404a" exitCode=0 Jan 30 00:10:23 crc kubenswrapper[5104]: I0130 00:10:23.583489 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"82c9d3ad0af1dbe7691b30eb224da8a661baeac16b755dc1fccf77c90dda404a"} Jan 30 00:10:23 crc kubenswrapper[5104]: I0130 00:10:23.583711 5104 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:23 crc kubenswrapper[5104]: I0130 00:10:23.584596 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:23 crc kubenswrapper[5104]: I0130 00:10:23.584661 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:23 crc kubenswrapper[5104]: I0130 00:10:23.584702 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:23 crc 
kubenswrapper[5104]: E0130 00:10:23.585113 5104 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:10:23 crc kubenswrapper[5104]: E0130 00:10:23.712528 5104 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.184:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 30 00:10:23 crc kubenswrapper[5104]: I0130 00:10:23.718002 5104 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:23 crc kubenswrapper[5104]: I0130 00:10:23.718871 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:23 crc kubenswrapper[5104]: I0130 00:10:23.718910 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:23 crc kubenswrapper[5104]: I0130 00:10:23.718924 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:23 crc kubenswrapper[5104]: I0130 00:10:23.718952 5104 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 30 00:10:23 crc kubenswrapper[5104]: E0130 00:10:23.719402 5104 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.184:6443: connect: connection refused" node="crc" Jan 30 00:10:24 crc kubenswrapper[5104]: I0130 00:10:24.413821 5104 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.184:6443: connect: connection refused Jan 30 00:10:24 
crc kubenswrapper[5104]: I0130 00:10:24.590204 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"6472a07b9e0d1d4d2094e9fe4464e17f6230a2915a19bb59bd54df043380b9f7"} Jan 30 00:10:24 crc kubenswrapper[5104]: I0130 00:10:24.590485 5104 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:24 crc kubenswrapper[5104]: I0130 00:10:24.591798 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:24 crc kubenswrapper[5104]: I0130 00:10:24.591869 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:24 crc kubenswrapper[5104]: I0130 00:10:24.591888 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:24 crc kubenswrapper[5104]: E0130 00:10:24.592204 5104 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:10:24 crc kubenswrapper[5104]: I0130 00:10:24.594791 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"9f72958df56798decb413106fe0178e5c520438b258eda1d391514fbf1aefed5"} Jan 30 00:10:24 crc kubenswrapper[5104]: I0130 00:10:24.594874 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"cb67eb59e5fa97f3ac0f355c63297316d06ab76329d05baadeb90ba933d0299b"} Jan 30 00:10:24 crc kubenswrapper[5104]: I0130 00:10:24.594886 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"6edbf8d3caa46b1b8204f581c4ee351245b3a0569a7dc860e8eebd05c21de73e"} Jan 30 00:10:24 crc kubenswrapper[5104]: I0130 00:10:24.595074 5104 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:24 crc kubenswrapper[5104]: I0130 00:10:24.595846 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:24 crc kubenswrapper[5104]: I0130 00:10:24.595968 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:24 crc kubenswrapper[5104]: I0130 00:10:24.595997 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:24 crc kubenswrapper[5104]: E0130 00:10:24.596391 5104 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:10:24 crc kubenswrapper[5104]: I0130 00:10:24.596418 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"dd29f88f850f66d48dcb41d9ff4b6ed03ce53947fcf1d89e94eb89734d32a9af"} Jan 30 00:10:24 crc kubenswrapper[5104]: I0130 00:10:24.596388 5104 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="dd29f88f850f66d48dcb41d9ff4b6ed03ce53947fcf1d89e94eb89734d32a9af" exitCode=0 Jan 30 00:10:24 crc kubenswrapper[5104]: I0130 00:10:24.596607 5104 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:24 crc kubenswrapper[5104]: I0130 00:10:24.596725 5104 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:24 crc kubenswrapper[5104]: I0130 00:10:24.596951 5104 
kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:24 crc kubenswrapper[5104]: I0130 00:10:24.597134 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:24 crc kubenswrapper[5104]: I0130 00:10:24.597155 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:24 crc kubenswrapper[5104]: I0130 00:10:24.597164 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:24 crc kubenswrapper[5104]: E0130 00:10:24.597327 5104 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:10:24 crc kubenswrapper[5104]: I0130 00:10:24.597825 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:24 crc kubenswrapper[5104]: I0130 00:10:24.597860 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:24 crc kubenswrapper[5104]: I0130 00:10:24.597870 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:24 crc kubenswrapper[5104]: I0130 00:10:24.597891 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:24 crc kubenswrapper[5104]: I0130 00:10:24.597929 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:24 crc kubenswrapper[5104]: I0130 00:10:24.597957 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:24 crc kubenswrapper[5104]: E0130 00:10:24.598407 5104 kubelet.go:3336] "No need to create a mirror pod, since failed to get node 
info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:10:24 crc kubenswrapper[5104]: E0130 00:10:24.598789 5104 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:10:25 crc kubenswrapper[5104]: I0130 00:10:25.413885 5104 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.184:6443: connect: connection refused Jan 30 00:10:25 crc kubenswrapper[5104]: I0130 00:10:25.604663 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"3ea0c352cb0e6754f7a7b428ac74c8d1d59af3fcd309fead8f147b31fc9d84b5"} Jan 30 00:10:25 crc kubenswrapper[5104]: I0130 00:10:25.604723 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"1a8af248a457a824a347b4bacdb934ce6f91151e6814ba046ecfd0b2f9fef1c4"} Jan 30 00:10:25 crc kubenswrapper[5104]: I0130 00:10:25.604743 5104 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 00:10:25 crc kubenswrapper[5104]: I0130 00:10:25.604811 5104 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:25 crc kubenswrapper[5104]: I0130 00:10:25.604946 5104 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:25 crc kubenswrapper[5104]: I0130 00:10:25.605009 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:10:25 crc kubenswrapper[5104]: I0130 00:10:25.605451 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" 
Jan 30 00:10:25 crc kubenswrapper[5104]: I0130 00:10:25.605474 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:25 crc kubenswrapper[5104]: I0130 00:10:25.605484 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:25 crc kubenswrapper[5104]: I0130 00:10:25.605494 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:25 crc kubenswrapper[5104]: I0130 00:10:25.605516 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:25 crc kubenswrapper[5104]: I0130 00:10:25.605530 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:25 crc kubenswrapper[5104]: E0130 00:10:25.605862 5104 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:10:25 crc kubenswrapper[5104]: E0130 00:10:25.606191 5104 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:10:26 crc kubenswrapper[5104]: I0130 00:10:26.240662 5104 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:10:26 crc kubenswrapper[5104]: I0130 00:10:26.241175 5104 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="Get \"https://192.168.126.11:6443/livez\": dial tcp 192.168.126.11:6443: connect: connection refused" start-of-body= Jan 30 00:10:26 crc kubenswrapper[5104]: I0130 00:10:26.241234 5104 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/livez\": dial tcp 192.168.126.11:6443: connect: connection refused"
Jan 30 00:10:26 crc kubenswrapper[5104]: I0130 00:10:26.267880 5104 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 30 00:10:26 crc kubenswrapper[5104]: I0130 00:10:26.268074 5104 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:10:26 crc kubenswrapper[5104]: I0130 00:10:26.268879 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:10:26 crc kubenswrapper[5104]: I0130 00:10:26.268915 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:10:26 crc kubenswrapper[5104]: I0130 00:10:26.268926 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:10:26 crc kubenswrapper[5104]: E0130 00:10:26.269217 5104 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 30 00:10:26 crc kubenswrapper[5104]: I0130 00:10:26.413674 5104 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.184:6443: connect: connection refused
Jan 30 00:10:26 crc kubenswrapper[5104]: I0130 00:10:26.436796 5104 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet"
Jan 30 00:10:26 crc kubenswrapper[5104]: E0130 00:10:26.515582 5104 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.184:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Jan 30 00:10:26 crc kubenswrapper[5104]: I0130 00:10:26.611353 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"ba7755cd1898e33390a59405284ca9bc8ab6567dee2e7c1134c9093d25ae341f"}
Jan 30 00:10:26 crc kubenswrapper[5104]: I0130 00:10:26.611538 5104 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:10:26 crc kubenswrapper[5104]: I0130 00:10:26.612710 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:10:26 crc kubenswrapper[5104]: I0130 00:10:26.612763 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:10:26 crc kubenswrapper[5104]: I0130 00:10:26.612782 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:10:26 crc kubenswrapper[5104]: E0130 00:10:26.613384 5104 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 30 00:10:26 crc kubenswrapper[5104]: E0130 00:10:26.653402 5104 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.184:6443: connect: connection refused" interval="6.4s"
Jan 30 00:10:26 crc kubenswrapper[5104]: I0130 00:10:26.818502 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 00:10:26 crc kubenswrapper[5104]: E0130 00:10:26.912936 5104 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.184:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jan 30 00:10:26 crc kubenswrapper[5104]: I0130 00:10:26.920477 5104 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:10:26 crc kubenswrapper[5104]: I0130 00:10:26.923014 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:10:26 crc kubenswrapper[5104]: I0130 00:10:26.923091 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:10:26 crc kubenswrapper[5104]: I0130 00:10:26.923118 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:10:26 crc kubenswrapper[5104]: I0130 00:10:26.923240 5104 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Jan 30 00:10:26 crc kubenswrapper[5104]: E0130 00:10:26.925380 5104 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.184:6443: connect: connection refused" node="crc"
Jan 30 00:10:27 crc kubenswrapper[5104]: I0130 00:10:27.005333 5104 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 30 00:10:27 crc kubenswrapper[5104]: I0130 00:10:27.005589 5104 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:10:27 crc kubenswrapper[5104]: I0130 00:10:27.006381 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:10:27 crc kubenswrapper[5104]: I0130 00:10:27.006417 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:10:27 crc kubenswrapper[5104]: I0130 00:10:27.006426 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:10:27 crc kubenswrapper[5104]: E0130 00:10:27.006723 5104 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 30 00:10:27 crc kubenswrapper[5104]: I0130 00:10:27.053627 5104 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 30 00:10:27 crc kubenswrapper[5104]: I0130 00:10:27.414241 5104 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.184:6443: connect: connection refused
Jan 30 00:10:27 crc kubenswrapper[5104]: I0130 00:10:27.616532 5104 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log"
Jan 30 00:10:27 crc kubenswrapper[5104]: I0130 00:10:27.618684 5104 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="9f72958df56798decb413106fe0178e5c520438b258eda1d391514fbf1aefed5" exitCode=255
Jan 30 00:10:27 crc kubenswrapper[5104]: I0130 00:10:27.618783 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"9f72958df56798decb413106fe0178e5c520438b258eda1d391514fbf1aefed5"}
Jan 30 00:10:27 crc kubenswrapper[5104]: I0130 00:10:27.619166 5104 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:10:27 crc kubenswrapper[5104]: I0130 00:10:27.619836 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:10:27 crc kubenswrapper[5104]: I0130 00:10:27.619880 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:10:27 crc kubenswrapper[5104]: I0130 00:10:27.619892 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:10:27 crc kubenswrapper[5104]: E0130 00:10:27.620224 5104 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 30 00:10:27 crc kubenswrapper[5104]: I0130 00:10:27.620704 5104 scope.go:117] "RemoveContainer" containerID="9f72958df56798decb413106fe0178e5c520438b258eda1d391514fbf1aefed5"
Jan 30 00:10:27 crc kubenswrapper[5104]: I0130 00:10:27.628636 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"10172da6a5c353a3c321326f80b9af59fe5c6acdb48f8951f30401fa25fde394"}
Jan 30 00:10:27 crc kubenswrapper[5104]: I0130 00:10:27.628771 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"560668f5a529df74a5be2ea17dcc5c09bd64122a4f78def29e8d38b4f098ec64"}
Jan 30 00:10:27 crc kubenswrapper[5104]: I0130 00:10:27.628790 5104 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:10:27 crc kubenswrapper[5104]: I0130 00:10:27.628895 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 30 00:10:27 crc kubenswrapper[5104]: I0130 00:10:27.629007 5104 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:10:27 crc kubenswrapper[5104]: I0130 00:10:27.629524 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:10:27 crc kubenswrapper[5104]: I0130 00:10:27.629596 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:10:27 crc kubenswrapper[5104]: I0130 00:10:27.629617 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:10:27 crc kubenswrapper[5104]: E0130 00:10:27.630240 5104 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 30 00:10:27 crc kubenswrapper[5104]: I0130 00:10:27.631199 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:10:27 crc kubenswrapper[5104]: I0130 00:10:27.631298 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:10:27 crc kubenswrapper[5104]: I0130 00:10:27.631322 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:10:27 crc kubenswrapper[5104]: E0130 00:10:27.631830 5104 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 30 00:10:28 crc kubenswrapper[5104]: I0130 00:10:28.414076 5104 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.184:6443: connect: connection refused
Jan 30 00:10:28 crc kubenswrapper[5104]: I0130 00:10:28.454014 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 30 00:10:28 crc kubenswrapper[5104]: I0130 00:10:28.454257 5104 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:10:28 crc kubenswrapper[5104]: I0130 00:10:28.454917 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:10:28 crc kubenswrapper[5104]: I0130 00:10:28.454955 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:10:28 crc kubenswrapper[5104]: I0130 00:10:28.454975 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:10:28 crc kubenswrapper[5104]: E0130 00:10:28.455291 5104 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 30 00:10:28 crc kubenswrapper[5104]: I0130 00:10:28.632185 5104 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log"
Jan 30 00:10:28 crc kubenswrapper[5104]: I0130 00:10:28.633457 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"2a072bf5b41d37cece1cb5d227d98dd9a2c710c0ecc64680ec29fe29449c5926"}
Jan 30 00:10:28 crc kubenswrapper[5104]: I0130 00:10:28.633613 5104 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:10:28 crc kubenswrapper[5104]: I0130 00:10:28.633611 5104 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:10:28 crc kubenswrapper[5104]: I0130 00:10:28.634056 5104 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:10:28 crc kubenswrapper[5104]: I0130 00:10:28.634171 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:10:28 crc kubenswrapper[5104]: I0130 00:10:28.634194 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:10:28 crc kubenswrapper[5104]: I0130 00:10:28.634204 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:10:28 crc kubenswrapper[5104]: I0130 00:10:28.634290 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:10:28 crc kubenswrapper[5104]: I0130 00:10:28.634307 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:10:28 crc kubenswrapper[5104]: I0130 00:10:28.634318 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:10:28 crc kubenswrapper[5104]: E0130 00:10:28.634591 5104 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 30 00:10:28 crc kubenswrapper[5104]: E0130 00:10:28.634890 5104 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 30 00:10:28 crc kubenswrapper[5104]: I0130 00:10:28.635400 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:10:28 crc kubenswrapper[5104]: I0130 00:10:28.635470 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:10:28 crc kubenswrapper[5104]: I0130 00:10:28.635490 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:10:28 crc kubenswrapper[5104]: E0130 00:10:28.636042 5104 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 30 00:10:28 crc kubenswrapper[5104]: E0130 00:10:28.718845 5104 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.184:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jan 30 00:10:29 crc kubenswrapper[5104]: I0130 00:10:29.267417 5104 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 30 00:10:29 crc kubenswrapper[5104]: I0130 00:10:29.267524 5104 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 30 00:10:29 crc kubenswrapper[5104]: I0130 00:10:29.532586 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 30 00:10:29 crc kubenswrapper[5104]: I0130 00:10:29.639422 5104 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log"
Jan 30 00:10:29 crc kubenswrapper[5104]: I0130 00:10:29.640369 5104 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log"
Jan 30 00:10:29 crc kubenswrapper[5104]: I0130 00:10:29.642865 5104 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="2a072bf5b41d37cece1cb5d227d98dd9a2c710c0ecc64680ec29fe29449c5926" exitCode=255
Jan 30 00:10:29 crc kubenswrapper[5104]: I0130 00:10:29.642974 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"2a072bf5b41d37cece1cb5d227d98dd9a2c710c0ecc64680ec29fe29449c5926"}
Jan 30 00:10:29 crc kubenswrapper[5104]: I0130 00:10:29.643027 5104 scope.go:117] "RemoveContainer" containerID="9f72958df56798decb413106fe0178e5c520438b258eda1d391514fbf1aefed5"
Jan 30 00:10:29 crc kubenswrapper[5104]: I0130 00:10:29.643113 5104 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:10:29 crc kubenswrapper[5104]: I0130 00:10:29.643190 5104 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:10:29 crc kubenswrapper[5104]: I0130 00:10:29.644189 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:10:29 crc kubenswrapper[5104]: I0130 00:10:29.644249 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:10:29 crc kubenswrapper[5104]: I0130 00:10:29.644270 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:10:29 crc kubenswrapper[5104]: I0130 00:10:29.644567 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:10:29 crc kubenswrapper[5104]: I0130 00:10:29.644633 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:10:29 crc kubenswrapper[5104]: I0130 00:10:29.644652 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:10:29 crc kubenswrapper[5104]: E0130 00:10:29.644807 5104 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 30 00:10:29 crc kubenswrapper[5104]: E0130 00:10:29.645514 5104 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 30 00:10:29 crc kubenswrapper[5104]: I0130 00:10:29.646340 5104 scope.go:117] "RemoveContainer" containerID="2a072bf5b41d37cece1cb5d227d98dd9a2c710c0ecc64680ec29fe29449c5926"
Jan 30 00:10:29 crc kubenswrapper[5104]: E0130 00:10:29.646794 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Jan 30 00:10:30 crc kubenswrapper[5104]: I0130 00:10:30.526816 5104 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Jan 30 00:10:30 crc kubenswrapper[5104]: I0130 00:10:30.527180 5104 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:10:30 crc kubenswrapper[5104]: I0130 00:10:30.528346 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:10:30 crc kubenswrapper[5104]: I0130 00:10:30.528417 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:10:30 crc kubenswrapper[5104]: I0130 00:10:30.528438 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:10:30 crc kubenswrapper[5104]: E0130 00:10:30.529281 5104 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 30 00:10:30 crc kubenswrapper[5104]: E0130 00:10:30.609439 5104 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 30 00:10:30 crc kubenswrapper[5104]: I0130 00:10:30.647741 5104 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log"
Jan 30 00:10:30 crc kubenswrapper[5104]: I0130 00:10:30.654438 5104 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:10:30 crc kubenswrapper[5104]: I0130 00:10:30.655494 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:10:30 crc kubenswrapper[5104]: I0130 00:10:30.655554 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:10:30 crc kubenswrapper[5104]: I0130 00:10:30.655574 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:10:30 crc kubenswrapper[5104]: E0130 00:10:30.656006 5104 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 30 00:10:30 crc kubenswrapper[5104]: I0130 00:10:30.656323 5104 scope.go:117] "RemoveContainer" containerID="2a072bf5b41d37cece1cb5d227d98dd9a2c710c0ecc64680ec29fe29449c5926"
Jan 30 00:10:30 crc kubenswrapper[5104]: E0130 00:10:30.656530 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Jan 30 00:10:31 crc kubenswrapper[5104]: I0130 00:10:31.473230 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-etcd/etcd-crc"
Jan 30 00:10:31 crc kubenswrapper[5104]: I0130 00:10:31.473696 5104 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:10:31 crc kubenswrapper[5104]: I0130 00:10:31.474985 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:10:31 crc kubenswrapper[5104]: I0130 00:10:31.475061 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:10:31 crc kubenswrapper[5104]: I0130 00:10:31.475082 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:10:31 crc kubenswrapper[5104]: E0130 00:10:31.476017 5104 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 30 00:10:33 crc kubenswrapper[5104]: I0130 00:10:33.326468 5104 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:10:33 crc kubenswrapper[5104]: I0130 00:10:33.327642 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:10:33 crc kubenswrapper[5104]: I0130 00:10:33.327682 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:10:33 crc kubenswrapper[5104]: I0130 00:10:33.327698 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:10:33 crc kubenswrapper[5104]: I0130 00:10:33.327723 5104 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Jan 30 00:10:34 crc kubenswrapper[5104]: I0130 00:10:34.686501 5104 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet"
Jan 30 00:10:34 crc kubenswrapper[5104]: I0130 00:10:34.801043 5104 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 00:10:34 crc kubenswrapper[5104]: I0130 00:10:34.801412 5104 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:10:34 crc kubenswrapper[5104]: I0130 00:10:34.802611 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:10:34 crc kubenswrapper[5104]: I0130 00:10:34.802710 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:10:34 crc kubenswrapper[5104]: I0130 00:10:34.802730 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:10:34 crc kubenswrapper[5104]: E0130 00:10:34.803484 5104 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 30 00:10:34 crc kubenswrapper[5104]: I0130 00:10:34.804028 5104 scope.go:117] "RemoveContainer" containerID="2a072bf5b41d37cece1cb5d227d98dd9a2c710c0ecc64680ec29fe29449c5926"
Jan 30 00:10:34 crc kubenswrapper[5104]: E0130 00:10:34.804394 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Jan 30 00:10:37 crc kubenswrapper[5104]: I0130 00:10:37.574306 5104 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Jan 30 00:10:37 crc kubenswrapper[5104]: I0130 00:10:37.574707 5104 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Jan 30 00:10:37 crc kubenswrapper[5104]: I0130 00:10:37.581480 5104 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Jan 30 00:10:37 crc kubenswrapper[5104]: I0130 00:10:37.581569 5104 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Jan 30 00:10:38 crc kubenswrapper[5104]: I0130 00:10:38.635366 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 00:10:38 crc kubenswrapper[5104]: I0130 00:10:38.635579 5104 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:10:38 crc kubenswrapper[5104]: I0130 00:10:38.636655 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:10:38 crc kubenswrapper[5104]: I0130 00:10:38.636728 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:10:38 crc kubenswrapper[5104]: I0130 00:10:38.636744 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:10:38 crc kubenswrapper[5104]: E0130 00:10:38.637395 5104 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 30 00:10:38 crc kubenswrapper[5104]: I0130 00:10:38.637820 5104 scope.go:117] "RemoveContainer" containerID="2a072bf5b41d37cece1cb5d227d98dd9a2c710c0ecc64680ec29fe29449c5926"
Jan 30 00:10:38 crc kubenswrapper[5104]: E0130 00:10:38.638141 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Jan 30 00:10:39 crc kubenswrapper[5104]: I0130 00:10:39.268267 5104 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": context deadline exceeded" start-of-body=
Jan 30 00:10:39 crc kubenswrapper[5104]: I0130 00:10:39.268388 5104 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": context deadline exceeded"
Jan 30 00:10:39 crc kubenswrapper[5104]: I0130 00:10:39.650153 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 30 00:10:39 crc kubenswrapper[5104]: I0130 00:10:39.650438 5104 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:10:39 crc kubenswrapper[5104]: I0130 00:10:39.651426 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:10:39 crc kubenswrapper[5104]: I0130 00:10:39.651491 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:10:39 crc kubenswrapper[5104]: I0130 00:10:39.651520 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:10:39 crc kubenswrapper[5104]: E0130 00:10:39.652174 5104 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 30 00:10:40 crc kubenswrapper[5104]: I0130 00:10:40.564674 5104 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc"
Jan 30 00:10:40 crc kubenswrapper[5104]: I0130 00:10:40.565036 5104 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:10:40 crc kubenswrapper[5104]: I0130 00:10:40.566125 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:10:40 crc kubenswrapper[5104]: I0130 00:10:40.566193 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:10:40 crc kubenswrapper[5104]: I0130 00:10:40.566213 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:10:40 crc kubenswrapper[5104]: E0130 00:10:40.566755 5104 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 30 00:10:40 crc kubenswrapper[5104]: I0130 00:10:40.585926 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc"
Jan 30 00:10:40 crc kubenswrapper[5104]: E0130 00:10:40.609784 5104 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 30 00:10:40 crc kubenswrapper[5104]: I0130 00:10:40.678461 5104 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:10:40 crc kubenswrapper[5104]: I0130 00:10:40.679154 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:10:40 crc kubenswrapper[5104]: I0130 00:10:40.679208 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:10:40 crc kubenswrapper[5104]: I0130 00:10:40.679226 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:10:40 crc kubenswrapper[5104]: E0130 00:10:40.679907 5104 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 30 00:10:41 crc kubenswrapper[5104]: I0130 00:10:41.247933 5104 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 00:10:41 crc kubenswrapper[5104]: I0130 00:10:41.248236 5104 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:10:41 crc kubenswrapper[5104]: I0130 00:10:41.249394 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:10:41 crc kubenswrapper[5104]: I0130 00:10:41.249464 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:10:41 crc kubenswrapper[5104]: I0130 00:10:41.249483 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:10:41 crc kubenswrapper[5104]: E0130 00:10:41.250273 5104 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 30 00:10:41 crc kubenswrapper[5104]: I0130 00:10:41.250699 5104 scope.go:117] "RemoveContainer" containerID="2a072bf5b41d37cece1cb5d227d98dd9a2c710c0ecc64680ec29fe29449c5926"
Jan 30 00:10:41 crc kubenswrapper[5104]: I0130 00:10:41.256995 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 00:10:41 crc kubenswrapper[5104]: I0130 00:10:41.684258 5104 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log"
Jan 30 00:10:41 crc kubenswrapper[5104]: I0130 00:10:41.686752 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"0b67f5e058def325e02fa3b6dc481244946e645830e28a7e4d8c1ccdd52d047a"}
Jan 30 00:10:41 crc kubenswrapper[5104]: I0130 00:10:41.687121 5104 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:10:41 crc kubenswrapper[5104]: I0130 00:10:41.687958 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:10:41 crc kubenswrapper[5104]: I0130 00:10:41.688010 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:10:41 crc kubenswrapper[5104]: I0130 00:10:41.688034 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:10:41 crc kubenswrapper[5104]: E0130 00:10:41.688609 5104 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 30 00:10:42 crc kubenswrapper[5104]: E0130 00:10:42.581942 5104 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59b6d8e1ca78 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:20.437490296 +0000 UTC m=+1.169829555,LastTimestamp:2026-01-30 00:10:20.437490296 +0000 UTC m=+1.169829555,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 00:10:42 crc kubenswrapper[5104]: E0130 00:10:42.587272 5104 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59b6de1c3e4d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:20.525207117 +0000 UTC
m=+1.257546356,LastTimestamp:2026-01-30 00:10:20.525207117 +0000 UTC m=+1.257546356,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:42 crc kubenswrapper[5104]: I0130 00:10:42.587914 5104 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:10:42 crc kubenswrapper[5104]: E0130 00:10:42.587980 5104 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 30 00:10:42 crc kubenswrapper[5104]: E0130 00:10:42.588042 5104 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 30 00:10:42 crc kubenswrapper[5104]: I0130 00:10:42.588128 5104 trace.go:236] Trace[554117970]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (30-Jan-2026 00:10:29.892) (total time: 12695ms): Jan 30 00:10:42 crc kubenswrapper[5104]: Trace[554117970]: ---"Objects listed" error:services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope 12695ms (00:10:42.588) Jan 30 00:10:42 crc kubenswrapper[5104]: Trace[554117970]: [12.695953283s] [12.695953283s] END Jan 30 00:10:42 crc kubenswrapper[5104]: E0130 00:10:42.588159 5104 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at 
the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 30 00:10:42 crc kubenswrapper[5104]: I0130 00:10:42.588227 5104 trace.go:236] Trace[1479385042]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (30-Jan-2026 00:10:30.101) (total time: 12486ms): Jan 30 00:10:42 crc kubenswrapper[5104]: Trace[1479385042]: ---"Objects listed" error:runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope 12486ms (00:10:42.588) Jan 30 00:10:42 crc kubenswrapper[5104]: Trace[1479385042]: [12.486354879s] [12.486354879s] END Jan 30 00:10:42 crc kubenswrapper[5104]: E0130 00:10:42.588265 5104 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 30 00:10:42 crc kubenswrapper[5104]: E0130 00:10:42.591824 5104 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59b6de1ca186 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:20.525232518 +0000 UTC m=+1.257571747,LastTimestamp:2026-01-30 00:10:20.525232518 +0000 UTC m=+1.257571747,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 
00:10:42 crc kubenswrapper[5104]: E0130 00:10:42.594319 5104 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59b6de1ffd3b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:20.525452603 +0000 UTC m=+1.257791832,LastTimestamp:2026-01-30 00:10:20.525452603 +0000 UTC m=+1.257791832,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:42 crc kubenswrapper[5104]: I0130 00:10:42.594649 5104 reflector.go:430] "Caches populated" logger="kubernetes.io/kube-apiserver-client-kubelet" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Jan 30 00:10:42 crc kubenswrapper[5104]: E0130 00:10:42.594845 5104 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 30 00:10:42 crc kubenswrapper[5104]: E0130 00:10:42.595949 5104 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59b6e31ba4ee default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit 
across pods,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:20.609053934 +0000 UTC m=+1.341393153,LastTimestamp:2026-01-30 00:10:20.609053934 +0000 UTC m=+1.341393153,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:42 crc kubenswrapper[5104]: E0130 00:10:42.597723 5104 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188f59b6de1c3e4d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59b6de1c3e4d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:20.525207117 +0000 UTC m=+1.257546356,LastTimestamp:2026-01-30 00:10:20.626615277 +0000 UTC m=+1.358954506,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:42 crc kubenswrapper[5104]: E0130 00:10:42.601678 5104 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188f59b6de1ca186\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59b6de1ca186 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:20.525232518 +0000 UTC m=+1.257571747,LastTimestamp:2026-01-30 
00:10:20.626639757 +0000 UTC m=+1.358978976,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:42 crc kubenswrapper[5104]: E0130 00:10:42.608911 5104 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188f59b6de1ffd3b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59b6de1ffd3b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:20.525452603 +0000 UTC m=+1.257791832,LastTimestamp:2026-01-30 00:10:20.626655478 +0000 UTC m=+1.358994697,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:42 crc kubenswrapper[5104]: E0130 00:10:42.618705 5104 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188f59b6de1c3e4d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59b6de1c3e4d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:20.525207117 +0000 UTC m=+1.257546356,LastTimestamp:2026-01-30 00:10:20.628053356 +0000 UTC m=+1.360392575,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:42 crc kubenswrapper[5104]: E0130 00:10:42.627205 5104 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188f59b6de1ca186\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59b6de1ca186 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:20.525232518 +0000 UTC m=+1.257571747,LastTimestamp:2026-01-30 00:10:20.628083337 +0000 UTC m=+1.360422546,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:42 crc kubenswrapper[5104]: E0130 00:10:42.634260 5104 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188f59b6de1ffd3b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59b6de1ffd3b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:20.525452603 +0000 UTC m=+1.257791832,LastTimestamp:2026-01-30 00:10:20.628123678 +0000 UTC m=+1.360462897,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:42 crc kubenswrapper[5104]: E0130 00:10:42.640821 5104 event.go:359] 
"Server rejected event (will not retry!)" err="events \"crc.188f59b6de1c3e4d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59b6de1c3e4d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:20.525207117 +0000 UTC m=+1.257546356,LastTimestamp:2026-01-30 00:10:20.629541315 +0000 UTC m=+1.361880534,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:42 crc kubenswrapper[5104]: E0130 00:10:42.649824 5104 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188f59b6de1ca186\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59b6de1ca186 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:20.525232518 +0000 UTC m=+1.257571747,LastTimestamp:2026-01-30 00:10:20.629565136 +0000 UTC m=+1.361904355,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:42 crc kubenswrapper[5104]: E0130 00:10:42.658676 5104 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188f59b6de1ffd3b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API 
group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59b6de1ffd3b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:20.525452603 +0000 UTC m=+1.257791832,LastTimestamp:2026-01-30 00:10:20.629577106 +0000 UTC m=+1.361916325,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:42 crc kubenswrapper[5104]: E0130 00:10:42.678715 5104 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188f59b6de1c3e4d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59b6de1c3e4d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:20.525207117 +0000 UTC m=+1.257546356,LastTimestamp:2026-01-30 00:10:20.630700337 +0000 UTC m=+1.363039546,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:42 crc kubenswrapper[5104]: E0130 00:10:42.686936 5104 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188f59b6de1ca186\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59b6de1ca186 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:20.525232518 +0000 UTC m=+1.257571747,LastTimestamp:2026-01-30 00:10:20.630732218 +0000 UTC m=+1.363071437,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:42 crc kubenswrapper[5104]: I0130 00:10:42.689756 5104 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:42 crc kubenswrapper[5104]: I0130 00:10:42.690263 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:10:42 crc kubenswrapper[5104]: I0130 00:10:42.690894 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:42 crc kubenswrapper[5104]: I0130 00:10:42.690980 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:42 crc kubenswrapper[5104]: I0130 00:10:42.691003 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:42 crc kubenswrapper[5104]: E0130 00:10:42.692000 5104 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:10:42 crc kubenswrapper[5104]: E0130 00:10:42.696472 5104 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188f59b6de1ffd3b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59b6de1ffd3b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:20.525452603 +0000 UTC m=+1.257791832,LastTimestamp:2026-01-30 00:10:20.630772919 +0000 UTC m=+1.363112138,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:42 crc kubenswrapper[5104]: E0130 00:10:42.708482 5104 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188f59b6de1c3e4d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59b6de1c3e4d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:20.525207117 +0000 UTC m=+1.257546356,LastTimestamp:2026-01-30 00:10:20.632971869 +0000 UTC m=+1.365311088,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:42 crc kubenswrapper[5104]: E0130 00:10:42.716094 5104 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188f59b6de1ca186\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59b6de1ca186 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc 
status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:20.525232518 +0000 UTC m=+1.257571747,LastTimestamp:2026-01-30 00:10:20.632998199 +0000 UTC m=+1.365337418,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:42 crc kubenswrapper[5104]: E0130 00:10:42.721558 5104 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188f59b6de1c3e4d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59b6de1c3e4d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:20.525207117 +0000 UTC m=+1.257546356,LastTimestamp:2026-01-30 00:10:20.632989259 +0000 UTC m=+1.365328498,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:42 crc kubenswrapper[5104]: E0130 00:10:42.729072 5104 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188f59b6de1ffd3b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59b6de1ffd3b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:20.525452603 +0000 UTC 
m=+1.257791832,LastTimestamp:2026-01-30 00:10:20.6330166 +0000 UTC m=+1.365355819,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:42 crc kubenswrapper[5104]: E0130 00:10:42.735637 5104 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188f59b6de1ca186\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59b6de1ca186 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:20.525232518 +0000 UTC m=+1.257571747,LastTimestamp:2026-01-30 00:10:20.633056661 +0000 UTC m=+1.365395890,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:42 crc kubenswrapper[5104]: E0130 00:10:42.740752 5104 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188f59b6de1ffd3b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59b6de1ffd3b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:20.525452603 +0000 UTC m=+1.257791832,LastTimestamp:2026-01-30 00:10:20.633089512 +0000 UTC m=+1.365428741,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:42 crc kubenswrapper[5104]: E0130 00:10:42.746741 5104 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188f59b6de1c3e4d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59b6de1c3e4d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:20.525207117 +0000 UTC m=+1.257546356,LastTimestamp:2026-01-30 00:10:20.633711148 +0000 UTC m=+1.366050367,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:42 crc kubenswrapper[5104]: E0130 00:10:42.750618 5104 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188f59b6de1ca186\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59b6de1ca186 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:20.525232518 +0000 UTC m=+1.257571747,LastTimestamp:2026-01-30 00:10:20.633726519 +0000 UTC m=+1.366065738,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:42 crc kubenswrapper[5104]: E0130 00:10:42.756514 5104 
event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59b6fe228895 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:21.062490261 +0000 UTC m=+1.794829520,LastTimestamp:2026-01-30 00:10:21.062490261 +0000 UTC m=+1.794829520,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:42 crc kubenswrapper[5104]: E0130 00:10:42.761373 5104 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59b6fe24b2db openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:21.062632155 +0000 UTC m=+1.794971384,LastTimestamp:2026-01-30 00:10:21.062632155 
+0000 UTC m=+1.794971384,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:42 crc kubenswrapper[5104]: E0130 00:10:42.766274 5104 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188f59b6fee8b03d openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:21.075476541 +0000 UTC m=+1.807815770,LastTimestamp:2026-01-30 00:10:21.075476541 +0000 UTC m=+1.807815770,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:42 crc kubenswrapper[5104]: E0130 00:10:42.770052 5104 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188f59b6feeb4dd4 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:21.075647956 +0000 UTC m=+1.807987195,LastTimestamp:2026-01-30 00:10:21.075647956 +0000 UTC m=+1.807987195,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:42 crc kubenswrapper[5104]: E0130 00:10:42.774234 5104 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188f59b6ff99a990 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:21.087074704 +0000 UTC m=+1.819413923,LastTimestamp:2026-01-30 00:10:21.087074704 +0000 UTC m=+1.819413923,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:42 crc kubenswrapper[5104]: E0130 00:10:42.778516 5104 
event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188f59b7277c7249 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:21.756248649 +0000 UTC m=+2.488587868,LastTimestamp:2026-01-30 00:10:21.756248649 +0000 UTC m=+2.488587868,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:42 crc kubenswrapper[5104]: E0130 00:10:42.782105 5104 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188f59b7278773b4 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:21.756969908 +0000 UTC m=+2.489309127,LastTimestamp:2026-01-30 00:10:21.756969908 +0000 UTC m=+2.489309127,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:42 crc kubenswrapper[5104]: E0130 00:10:42.786233 5104 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188f59b7278ae1e4 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Created,Message:Created container: wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:21.757194724 +0000 UTC m=+2.489533943,LastTimestamp:2026-01-30 00:10:21.757194724 +0000 UTC m=+2.489533943,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:42 crc kubenswrapper[5104]: E0130 00:10:42.789678 5104 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59b7279bb0db openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:21.758296283 +0000 UTC m=+2.490635502,LastTimestamp:2026-01-30 00:10:21.758296283 +0000 UTC 
m=+2.490635502,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:42 crc kubenswrapper[5104]: E0130 00:10:42.795291 5104 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59b72830ae01 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:21.768060417 +0000 UTC m=+2.500399646,LastTimestamp:2026-01-30 00:10:21.768060417 +0000 UTC m=+2.500399646,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:42 crc kubenswrapper[5104]: E0130 00:10:42.800895 5104 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188f59b728657b65 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:21.771520869 +0000 UTC m=+2.503860088,LastTimestamp:2026-01-30 
00:10:21.771520869 +0000 UTC m=+2.503860088,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:42 crc kubenswrapper[5104]: E0130 00:10:42.803442 5104 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188f59b728672422 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Started,Message:Started container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:21.771629602 +0000 UTC m=+2.503968821,LastTimestamp:2026-01-30 00:10:21.771629602 +0000 UTC m=+2.503968821,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:42 crc kubenswrapper[5104]: E0130 00:10:42.806397 5104 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188f59b7287eed99 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:21.773188505 +0000 UTC m=+2.505527764,LastTimestamp:2026-01-30 00:10:21.773188505 +0000 UTC m=+2.505527764,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:42 crc kubenswrapper[5104]: E0130 00:10:42.808681 5104 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59b72883881b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:21.773490203 +0000 UTC m=+2.505829422,LastTimestamp:2026-01-30 00:10:21.773490203 +0000 UTC m=+2.505829422,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:42 crc kubenswrapper[5104]: E0130 00:10:42.811081 5104 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188f59b72886baa4 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:21.773699748 +0000 UTC m=+2.506038967,LastTimestamp:2026-01-30 00:10:21.773699748 +0000 UTC m=+2.506038967,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:42 crc kubenswrapper[5104]: E0130 00:10:42.812404 5104 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59b729623199 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:21.788082585 +0000 UTC m=+2.520421804,LastTimestamp:2026-01-30 00:10:21.788082585 +0000 UTC m=+2.520421804,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:42 crc kubenswrapper[5104]: E0130 00:10:42.814631 5104 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188f59b73852d75b openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container: cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:22.038734683 +0000 UTC m=+2.771073932,LastTimestamp:2026-01-30 00:10:22.038734683 +0000 UTC m=+2.771073932,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:42 crc kubenswrapper[5104]: E0130 00:10:42.817472 5104 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188f59b739114b8d openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:22.051216269 +0000 UTC m=+2.783555488,LastTimestamp:2026-01-30 00:10:22.051216269 +0000 UTC m=+2.783555488,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:42 crc kubenswrapper[5104]: E0130 00:10:42.818697 5104 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace 
\"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188f59b739282bb9 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:22.052715449 +0000 UTC m=+2.785054668,LastTimestamp:2026-01-30 00:10:22.052715449 +0000 UTC m=+2.785054668,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:42 crc kubenswrapper[5104]: E0130 00:10:42.821719 5104 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188f59b7539fb796 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Created,Message:Created container: kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:22.496757654 +0000 UTC m=+3.229096903,LastTimestamp:2026-01-30 00:10:22.496757654 +0000 UTC m=+3.229096903,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 
+0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:42 crc kubenswrapper[5104]: E0130 00:10:42.823138 5104 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188f59b754668bab openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Started,Message:Started container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:22.509788075 +0000 UTC m=+3.242127294,LastTimestamp:2026-01-30 00:10:22.509788075 +0000 UTC m=+3.242127294,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:42 crc kubenswrapper[5104]: E0130 00:10:42.825511 5104 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188f59b75479b27e openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Pulled,Message:Container image 
\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:22.511043198 +0000 UTC m=+3.243382427,LastTimestamp:2026-01-30 00:10:22.511043198 +0000 UTC m=+3.243382427,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:42 crc kubenswrapper[5104]: E0130 00:10:42.827480 5104 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59b756c2e6ed openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:22.549395181 +0000 UTC m=+3.281734440,LastTimestamp:2026-01-30 00:10:22.549395181 +0000 UTC m=+3.281734440,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:42 crc kubenswrapper[5104]: E0130 00:10:42.830208 5104 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59b756e9f8ae openshift-etcd 0 0001-01-01 
00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:22.55195563 +0000 UTC m=+3.284294889,LastTimestamp:2026-01-30 00:10:22.55195563 +0000 UTC m=+3.284294889,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:42 crc kubenswrapper[5104]: E0130 00:10:42.834322 5104 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188f59b7573a284e openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:22.557210702 +0000 UTC m=+3.289549931,LastTimestamp:2026-01-30 00:10:22.557210702 +0000 UTC m=+3.289549931,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:42 crc kubenswrapper[5104]: E0130 
00:10:42.838727 5104 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188f59b75754bc00 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:22.558952448 +0000 UTC m=+3.291291687,LastTimestamp:2026-01-30 00:10:22.558952448 +0000 UTC m=+3.291291687,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:42 crc kubenswrapper[5104]: E0130 00:10:42.843236 5104 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188f59b763832590 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Created,Message:Created container: kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:22.76332072 
+0000 UTC m=+3.495659939,LastTimestamp:2026-01-30 00:10:22.76332072 +0000 UTC m=+3.495659939,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:42 crc kubenswrapper[5104]: E0130 00:10:42.847256 5104 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188f59b7648cf09b openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Started,Message:Started container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:22.780739739 +0000 UTC m=+3.513078958,LastTimestamp:2026-01-30 00:10:22.780739739 +0000 UTC m=+3.513078958,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:42 crc kubenswrapper[5104]: E0130 00:10:42.851529 5104 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59b76765871a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container: kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:22.828488474 +0000 UTC m=+3.560827693,LastTimestamp:2026-01-30 00:10:22.828488474 +0000 UTC m=+3.560827693,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:42 crc kubenswrapper[5104]: E0130 00:10:42.857012 5104 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59b76772265d openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Created,Message:Created container: etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:22.829315677 +0000 UTC m=+3.561654896,LastTimestamp:2026-01-30 00:10:22.829315677 +0000 UTC m=+3.561654896,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:42 crc kubenswrapper[5104]: E0130 00:10:42.862524 5104 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188f59b7677412f5 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] 
[] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container: kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:22.829441781 +0000 UTC m=+3.561781010,LastTimestamp:2026-01-30 00:10:22.829441781 +0000 UTC m=+3.561781010,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:42 crc kubenswrapper[5104]: E0130 00:10:42.866791 5104 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188f59b767792c75 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:22.829775989 +0000 UTC m=+3.562115208,LastTimestamp:2026-01-30 00:10:22.829775989 +0000 UTC m=+3.562115208,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:42 crc kubenswrapper[5104]: E0130 00:10:42.871527 5104 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" 
event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188f59b7680ce0d8 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:22.83945596 +0000 UTC m=+3.571795179,LastTimestamp:2026-01-30 00:10:22.83945596 +0000 UTC m=+3.571795179,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:42 crc kubenswrapper[5104]: E0130 00:10:42.876282 5104 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188f59b7685d8d8c openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:22.844743052 +0000 UTC m=+3.577082271,LastTimestamp:2026-01-30 00:10:22.844743052 +0000 UTC m=+3.577082271,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:42 crc kubenswrapper[5104]: E0130 00:10:42.880941 
5104 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188f59b76873872b openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:22.846183211 +0000 UTC m=+3.578522430,LastTimestamp:2026-01-30 00:10:22.846183211 +0000 UTC m=+3.578522430,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:42 crc kubenswrapper[5104]: E0130 00:10:42.885539 5104 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59b768e93a90 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:22.853896848 +0000 UTC m=+3.586236067,LastTimestamp:2026-01-30 00:10:22.853896848 +0000 UTC m=+3.586236067,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:42 crc kubenswrapper[5104]: E0130 00:10:42.890770 5104 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59b768f7338a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:22.854812554 +0000 UTC m=+3.587151773,LastTimestamp:2026-01-30 00:10:22.854812554 +0000 UTC m=+3.587151773,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:42 crc kubenswrapper[5104]: E0130 00:10:42.895757 5104 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59b7693e7ce5 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Started,Message:Started container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 
00:10:22.859484389 +0000 UTC m=+3.591823608,LastTimestamp:2026-01-30 00:10:22.859484389 +0000 UTC m=+3.591823608,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:42 crc kubenswrapper[5104]: E0130 00:10:42.900770 5104 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188f59b77916e80b openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Created,Message:Created container: kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:23.125325835 +0000 UTC m=+3.857665064,LastTimestamp:2026-01-30 00:10:23.125325835 +0000 UTC m=+3.857665064,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:42 crc kubenswrapper[5104]: E0130 00:10:42.907980 5104 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59b7793b98cc openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Created,Message:Created container: 
kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:23.12773038 +0000 UTC m=+3.860069619,LastTimestamp:2026-01-30 00:10:23.12773038 +0000 UTC m=+3.860069619,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:42 crc kubenswrapper[5104]: E0130 00:10:42.913802 5104 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188f59b77c7ecb16 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Started,Message:Started container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:23.182465814 +0000 UTC m=+3.914805023,LastTimestamp:2026-01-30 00:10:23.182465814 +0000 UTC m=+3.914805023,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:42 crc kubenswrapper[5104]: E0130 00:10:42.919985 5104 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188f59b77c957154 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:23.183950164 +0000 UTC m=+3.916289393,LastTimestamp:2026-01-30 00:10:23.183950164 +0000 UTC m=+3.916289393,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:42 crc kubenswrapper[5104]: E0130 00:10:42.925887 5104 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59b77d085db1 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Started,Message:Started container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:23.191481777 +0000 UTC m=+3.923820996,LastTimestamp:2026-01-30 00:10:23.191481777 +0000 UTC m=+3.923820996,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:42 crc kubenswrapper[5104]: E0130 00:10:42.931209 5104 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create 
resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59b77d6139c4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:23.197305284 +0000 UTC m=+3.929644513,LastTimestamp:2026-01-30 00:10:23.197305284 +0000 UTC m=+3.929644513,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:42 crc kubenswrapper[5104]: E0130 00:10:42.937632 5104 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59b7949e9be6 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:23.58720407 +0000 UTC m=+4.319543319,LastTimestamp:2026-01-30 00:10:23.58720407 +0000 UTC m=+4.319543319,Count:1,Type:Normal,EventTime:0001-01-01 
00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:42 crc kubenswrapper[5104]: E0130 00:10:42.943652 5104 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188f59b799e4c456 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Created,Message:Created container: kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:23.675688022 +0000 UTC m=+4.408027251,LastTimestamp:2026-01-30 00:10:23.675688022 +0000 UTC m=+4.408027251,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:42 crc kubenswrapper[5104]: E0130 00:10:42.949000 5104 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59b79c0919a6 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Created,Message:Created container: kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 
00:10:23.71162359 +0000 UTC m=+4.443962809,LastTimestamp:2026-01-30 00:10:23.71162359 +0000 UTC m=+4.443962809,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:42 crc kubenswrapper[5104]: E0130 00:10:42.956035 5104 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188f59b7a2150ad1 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Started,Message:Started container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:23.813069521 +0000 UTC m=+4.545408740,LastTimestamp:2026-01-30 00:10:23.813069521 +0000 UTC m=+4.545408740,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:42 crc kubenswrapper[5104]: E0130 00:10:42.961649 5104 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59b7a677ea86 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Started,Message:Started container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:23.886658182 +0000 UTC m=+4.618997431,LastTimestamp:2026-01-30 00:10:23.886658182 +0000 UTC m=+4.618997431,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:42 crc kubenswrapper[5104]: E0130 00:10:42.966995 5104 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59b7a68ad37a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:23.887897466 +0000 UTC m=+4.620236675,LastTimestamp:2026-01-30 00:10:23.887897466 +0000 UTC m=+4.620236675,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:42 crc kubenswrapper[5104]: E0130 00:10:42.974168 5104 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User 
\"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59b7a9266c1d openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Created,Message:Created container: etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:23.931649053 +0000 UTC m=+4.663988272,LastTimestamp:2026-01-30 00:10:23.931649053 +0000 UTC m=+4.663988272,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:42 crc kubenswrapper[5104]: E0130 00:10:42.980254 5104 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59b7aec4bb4b openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Started,Message:Started container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:24.025910091 +0000 UTC m=+4.758249310,LastTimestamp:2026-01-30 00:10:24.025910091 +0000 UTC m=+4.758249310,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:42 crc kubenswrapper[5104]: E0130 00:10:42.986793 5104 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create 
resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59b7b3337574 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container: kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:24.100275572 +0000 UTC m=+4.832614801,LastTimestamp:2026-01-30 00:10:24.100275572 +0000 UTC m=+4.832614801,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:42 crc kubenswrapper[5104]: E0130 00:10:42.994064 5104 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59b7b765d765 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:24.170686309 +0000 UTC m=+4.903025538,LastTimestamp:2026-01-30 00:10:24.170686309 +0000 UTC m=+4.903025538,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:42 crc kubenswrapper[5104]: E0130 00:10:42.998013 5104 event.go:359] "Server 
rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59b7b790b084 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:24.173494404 +0000 UTC m=+4.905833633,LastTimestamp:2026-01-30 00:10:24.173494404 +0000 UTC m=+4.905833633,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:43 crc kubenswrapper[5104]: E0130 00:10:43.004048 5104 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59b7c8db9566 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:24.463615334 +0000 UTC m=+5.195954553,LastTimestamp:2026-01-30 00:10:24.463615334 +0000 UTC 
m=+5.195954553,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:43 crc kubenswrapper[5104]: E0130 00:10:43.010258 5104 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59b7cc506106 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:24.521601286 +0000 UTC m=+5.253940545,LastTimestamp:2026-01-30 00:10:24.521601286 +0000 UTC m=+5.253940545,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:43 crc kubenswrapper[5104]: E0130 00:10:43.013784 5104 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59b7d0e0aed8 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on 
machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:24.598167256 +0000 UTC m=+5.330506485,LastTimestamp:2026-01-30 00:10:24.598167256 +0000 UTC m=+5.330506485,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:43 crc kubenswrapper[5104]: E0130 00:10:43.016287 5104 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59b7dfcf4125 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container: etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:24.848683301 +0000 UTC m=+5.581022530,LastTimestamp:2026-01-30 00:10:24.848683301 +0000 UTC m=+5.581022530,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:43 crc kubenswrapper[5104]: E0130 00:10:43.018248 5104 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59b7e152d4e3 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 
00:10:24.874083555 +0000 UTC m=+5.606422774,LastTimestamp:2026-01-30 00:10:24.874083555 +0000 UTC m=+5.606422774,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:43 crc kubenswrapper[5104]: E0130 00:10:43.023244 5104 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59b7e164bc04 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:24.875256836 +0000 UTC m=+5.607596055,LastTimestamp:2026-01-30 00:10:24.875256836 +0000 UTC m=+5.607596055,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:43 crc kubenswrapper[5104]: E0130 00:10:43.024382 5104 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59b7fad6e6e7 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container: 
etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:25.302169319 +0000 UTC m=+6.034508538,LastTimestamp:2026-01-30 00:10:25.302169319 +0000 UTC m=+6.034508538,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:43 crc kubenswrapper[5104]: E0130 00:10:43.029442 5104 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59b7fcb6763c openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:25.333597756 +0000 UTC m=+6.065936975,LastTimestamp:2026-01-30 00:10:25.333597756 +0000 UTC m=+6.065936975,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:43 crc kubenswrapper[5104]: E0130 00:10:43.034304 5104 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59b7fcc49554 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:25.33452322 +0000 UTC m=+6.066862449,LastTimestamp:2026-01-30 00:10:25.33452322 +0000 UTC m=+6.066862449,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:43 crc kubenswrapper[5104]: E0130 00:10:43.039198 5104 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59b811d9173b openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Created,Message:Created container: etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:25.688188731 +0000 UTC m=+6.420527950,LastTimestamp:2026-01-30 00:10:25.688188731 +0000 UTC m=+6.420527950,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:43 crc kubenswrapper[5104]: E0130 00:10:43.043977 5104 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59b816b5d74f openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Started,Message:Started container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:25.769764687 +0000 UTC m=+6.502103906,LastTimestamp:2026-01-30 00:10:25.769764687 +0000 UTC m=+6.502103906,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:43 crc kubenswrapper[5104]: E0130 00:10:43.048415 5104 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59b816c77b41 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:25.770920769 +0000 UTC m=+6.503259988,LastTimestamp:2026-01-30 00:10:25.770920769 +0000 UTC m=+6.503259988,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:43 crc kubenswrapper[5104]: E0130 00:10:43.053114 5104 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Jan 30 00:10:43 crc kubenswrapper[5104]: 
&Event{ObjectMeta:{kube-apiserver-crc.188f59b832cfa1fc openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:6443/livez": dial tcp 192.168.126.11:6443: connect: connection refused Jan 30 00:10:43 crc kubenswrapper[5104]: body: Jan 30 00:10:43 crc kubenswrapper[5104]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:26.24121702 +0000 UTC m=+6.973556239,LastTimestamp:2026-01-30 00:10:26.24121702 +0000 UTC m=+6.973556239,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 30 00:10:43 crc kubenswrapper[5104]: > Jan 30 00:10:43 crc kubenswrapper[5104]: E0130 00:10:43.057682 5104 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59b832d0b990 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:6443/livez\": dial tcp 192.168.126.11:6443: connect: connection refused,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:26.241288592 +0000 UTC m=+6.973627821,LastTimestamp:2026-01-30 00:10:26.241288592 +0000 UTC m=+6.973627821,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:43 crc kubenswrapper[5104]: E0130 00:10:43.063312 5104 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59b85b72b684 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Created,Message:Created container: etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:26.922993284 +0000 UTC m=+7.655332543,LastTimestamp:2026-01-30 00:10:26.922993284 +0000 UTC m=+7.655332543,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:43 crc kubenswrapper[5104]: E0130 00:10:43.084580 5104 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59b8632f37b6 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Started,Message:Started container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:27.052787638 +0000 UTC m=+7.785126877,LastTimestamp:2026-01-30 00:10:27.052787638 +0000 UTC m=+7.785126877,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:43 crc kubenswrapper[5104]: E0130 00:10:43.093193 5104 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59b8634ae22b openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:27.054600747 +0000 UTC m=+7.786939986,LastTimestamp:2026-01-30 00:10:27.054600747 +0000 UTC m=+7.786939986,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:43 crc kubenswrapper[5104]: E0130 00:10:43.099205 5104 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59b87e7b8ede openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Created,Message:Created container: etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:27.510775518 +0000 UTC m=+8.243114737,LastTimestamp:2026-01-30 00:10:27.510775518 +0000 UTC 
m=+8.243114737,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:43 crc kubenswrapper[5104]: E0130 00:10:43.114216 5104 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59b8834e84dd openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Started,Message:Started container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:27.591709917 +0000 UTC m=+8.324049146,LastTimestamp:2026-01-30 00:10:27.591709917 +0000 UTC m=+8.324049146,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:43 crc kubenswrapper[5104]: E0130 00:10:43.120978 5104 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188f59b7b790b084\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59b7b790b084 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on 
machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:24.173494404 +0000 UTC m=+4.905833633,LastTimestamp:2026-01-30 00:10:27.621838228 +0000 UTC m=+8.354177437,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:43 crc kubenswrapper[5104]: E0130 00:10:43.126045 5104 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188f59b7c8db9566\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59b7c8db9566 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:24.463615334 +0000 UTC m=+5.195954553,LastTimestamp:2026-01-30 00:10:27.859434774 +0000 UTC m=+8.591774033,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:43 crc kubenswrapper[5104]: E0130 00:10:43.131978 5104 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188f59b7cc506106\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59b7cc506106 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:24.521601286 +0000 UTC m=+5.253940545,LastTimestamp:2026-01-30 00:10:27.913182661 +0000 UTC m=+8.645521920,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:43 crc kubenswrapper[5104]: E0130 00:10:43.136964 5104 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Jan 30 00:10:43 crc kubenswrapper[5104]: &Event{ObjectMeta:{kube-controller-manager-crc.188f59b8e730ee8b openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://localhost:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 30 00:10:43 crc kubenswrapper[5104]: body: Jan 30 00:10:43 crc kubenswrapper[5104]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:29.267492491 +0000 UTC m=+9.999831740,LastTimestamp:2026-01-30 00:10:29.267492491 +0000 UTC m=+9.999831740,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 30 00:10:43 crc 
kubenswrapper[5104]: > Jan 30 00:10:43 crc kubenswrapper[5104]: E0130 00:10:43.140115 5104 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188f59b8e731f78d openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:29.267560333 +0000 UTC m=+9.999899582,LastTimestamp:2026-01-30 00:10:29.267560333 +0000 UTC m=+9.999899582,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:43 crc kubenswrapper[5104]: E0130 00:10:43.147093 5104 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59b8fdcb31fa openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod 
kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:29.64670105 +0000 UTC m=+10.379040309,LastTimestamp:2026-01-30 00:10:29.64670105 +0000 UTC m=+10.379040309,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:43 crc kubenswrapper[5104]: E0130 00:10:43.151276 5104 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188f59b8fdcb31fa\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59b8fdcb31fa openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:29.64670105 +0000 UTC m=+10.379040309,LastTimestamp:2026-01-30 00:10:30.656497275 +0000 UTC m=+11.388836494,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:43 crc kubenswrapper[5104]: E0130 00:10:43.157951 5104 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188f59b8fdcb31fa\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59b8fdcb31fa 
openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:29.64670105 +0000 UTC m=+10.379040309,LastTimestamp:2026-01-30 00:10:34.80433146 +0000 UTC m=+15.536670699,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:43 crc kubenswrapper[5104]: E0130 00:10:43.163340 5104 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Jan 30 00:10:43 crc kubenswrapper[5104]: &Event{ObjectMeta:{kube-apiserver-crc.188f59bad65629fc openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Jan 30 00:10:43 crc kubenswrapper[5104]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 30 00:10:43 crc kubenswrapper[5104]: Jan 30 00:10:43 crc kubenswrapper[5104]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:37.57465446 +0000 UTC 
m=+18.306993719,LastTimestamp:2026-01-30 00:10:37.57465446 +0000 UTC m=+18.306993719,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 30 00:10:43 crc kubenswrapper[5104]: > Jan 30 00:10:43 crc kubenswrapper[5104]: E0130 00:10:43.168010 5104 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59bad6578737 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:37.574743863 +0000 UTC m=+18.307083112,LastTimestamp:2026-01-30 00:10:37.574743863 +0000 UTC m=+18.307083112,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:43 crc kubenswrapper[5104]: E0130 00:10:43.172688 5104 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188f59bad65629fc\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Jan 30 00:10:43 crc kubenswrapper[5104]: &Event{ObjectMeta:{kube-apiserver-crc.188f59bad65629fc openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Jan 30 00:10:43 crc kubenswrapper[5104]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 30 00:10:43 crc kubenswrapper[5104]: Jan 30 00:10:43 crc kubenswrapper[5104]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:37.57465446 +0000 UTC m=+18.306993719,LastTimestamp:2026-01-30 00:10:37.581531696 +0000 UTC m=+18.313870935,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 30 00:10:43 crc kubenswrapper[5104]: > Jan 30 00:10:43 crc kubenswrapper[5104]: E0130 00:10:43.177462 5104 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188f59bad6578737\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59bad6578737 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:37.574743863 +0000 UTC m=+18.307083112,LastTimestamp:2026-01-30 00:10:37.581598068 +0000 UTC m=+18.313937307,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:43 crc kubenswrapper[5104]: E0130 00:10:43.182729 5104 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188f59b8fdcb31fa\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59b8fdcb31fa openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:29.64670105 +0000 UTC m=+10.379040309,LastTimestamp:2026-01-30 00:10:38.63809191 +0000 UTC m=+19.370431139,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:43 crc kubenswrapper[5104]: E0130 00:10:43.190375 5104 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Jan 30 00:10:43 crc kubenswrapper[5104]: &Event{ObjectMeta:{kube-controller-manager-crc.188f59bb3b49aa2f openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://localhost:10357/healthz": context deadline exceeded Jan 30 00:10:43 crc kubenswrapper[5104]: body: Jan 30 00:10:43 crc kubenswrapper[5104]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:39.268334127 +0000 UTC m=+20.000673386,LastTimestamp:2026-01-30 00:10:39.268334127 +0000 UTC m=+20.000673386,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 30 00:10:43 crc kubenswrapper[5104]: > Jan 30 00:10:43 crc kubenswrapper[5104]: E0130 00:10:43.198530 5104 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188f59bb3b4b3cf8 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://localhost:10357/healthz\": context deadline exceeded,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:39.26843724 +0000 UTC m=+20.000776489,LastTimestamp:2026-01-30 00:10:39.26843724 +0000 UTC m=+20.000776489,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:43 crc kubenswrapper[5104]: E0130 
00:10:43.203017 5104 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188f59b7b790b084\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59b7b790b084 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:24.173494404 +0000 UTC m=+4.905833633,LastTimestamp:2026-01-30 00:10:41.252779221 +0000 UTC m=+21.985118480,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:43 crc kubenswrapper[5104]: E0130 00:10:43.211009 5104 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188f59b7c8db9566\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59b7c8db9566 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 
00:10:24.463615334 +0000 UTC m=+5.195954553,LastTimestamp:2026-01-30 00:10:41.47598329 +0000 UTC m=+22.208322509,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:43 crc kubenswrapper[5104]: E0130 00:10:43.215725 5104 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188f59b7cc506106\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59b7cc506106 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:24.521601286 +0000 UTC m=+5.253940545,LastTimestamp:2026-01-30 00:10:41.487216272 +0000 UTC m=+22.219555481,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:43 crc kubenswrapper[5104]: I0130 00:10:43.418988 5104 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:10:43 crc kubenswrapper[5104]: E0130 00:10:43.556356 5104 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jan 30 00:10:43 crc kubenswrapper[5104]: I0130 00:10:43.693247 5104 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log"
Jan 30 00:10:43 crc kubenswrapper[5104]: I0130 00:10:43.694385 5104 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log"
Jan 30 00:10:43 crc kubenswrapper[5104]: I0130 00:10:43.695954 5104 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="0b67f5e058def325e02fa3b6dc481244946e645830e28a7e4d8c1ccdd52d047a" exitCode=255
Jan 30 00:10:43 crc kubenswrapper[5104]: I0130 00:10:43.696025 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"0b67f5e058def325e02fa3b6dc481244946e645830e28a7e4d8c1ccdd52d047a"}
Jan 30 00:10:43 crc kubenswrapper[5104]: I0130 00:10:43.696062 5104 scope.go:117] "RemoveContainer" containerID="2a072bf5b41d37cece1cb5d227d98dd9a2c710c0ecc64680ec29fe29449c5926"
Jan 30 00:10:43 crc kubenswrapper[5104]: I0130 00:10:43.696306 5104 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:10:43 crc kubenswrapper[5104]: I0130 00:10:43.696823 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:10:43 crc kubenswrapper[5104]: I0130 00:10:43.696868 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:10:43 crc kubenswrapper[5104]: I0130 00:10:43.696881 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:10:43 crc kubenswrapper[5104]: E0130 00:10:43.697186 5104 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 30 00:10:43 crc kubenswrapper[5104]: I0130 00:10:43.697430 5104 scope.go:117] "RemoveContainer" containerID="0b67f5e058def325e02fa3b6dc481244946e645830e28a7e4d8c1ccdd52d047a"
Jan 30 00:10:43 crc kubenswrapper[5104]: E0130 00:10:43.697624 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Jan 30 00:10:43 crc kubenswrapper[5104]: E0130 00:10:43.711674 5104 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188f59b8fdcb31fa\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59b8fdcb31fa openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:29.64670105 +0000 UTC m=+10.379040309,LastTimestamp:2026-01-30 00:10:43.697601599 +0000 UTC m=+24.429940808,Count:5,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 00:10:44 crc kubenswrapper[5104]: I0130 00:10:44.417960 5104 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:10:44 crc kubenswrapper[5104]: I0130 00:10:44.699935 5104 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log"
Jan 30 00:10:44 crc kubenswrapper[5104]: I0130 00:10:44.702056 5104 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:10:44 crc kubenswrapper[5104]: I0130 00:10:44.702624 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:10:44 crc kubenswrapper[5104]: I0130 00:10:44.702674 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:10:44 crc kubenswrapper[5104]: I0130 00:10:44.702688 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:10:44 crc kubenswrapper[5104]: E0130 00:10:44.703120 5104 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 30 00:10:44 crc kubenswrapper[5104]: I0130 00:10:44.703416 5104 scope.go:117] "RemoveContainer" containerID="0b67f5e058def325e02fa3b6dc481244946e645830e28a7e4d8c1ccdd52d047a"
Jan 30 00:10:44 crc kubenswrapper[5104]: E0130 00:10:44.703682 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Jan 30 00:10:44 crc kubenswrapper[5104]: E0130 00:10:44.713492 5104 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188f59b8fdcb31fa\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59b8fdcb31fa openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:29.64670105 +0000 UTC m=+10.379040309,LastTimestamp:2026-01-30 00:10:44.703646403 +0000 UTC m=+25.435985632,Count:6,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 00:10:44 crc kubenswrapper[5104]: I0130 00:10:44.801068 5104 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 00:10:45 crc kubenswrapper[5104]: I0130 00:10:45.420559 5104 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:10:45 crc kubenswrapper[5104]: I0130 00:10:45.703993 5104 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:10:45 crc kubenswrapper[5104]: I0130 00:10:45.704669 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:10:45 crc kubenswrapper[5104]: I0130 00:10:45.704728 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:10:45 crc kubenswrapper[5104]: I0130 00:10:45.704747 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:10:45 crc kubenswrapper[5104]: E0130 00:10:45.705385 5104 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 30 00:10:45 crc kubenswrapper[5104]: I0130 00:10:45.705893 5104 scope.go:117] "RemoveContainer" containerID="0b67f5e058def325e02fa3b6dc481244946e645830e28a7e4d8c1ccdd52d047a"
Jan 30 00:10:45 crc kubenswrapper[5104]: E0130 00:10:45.706253 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Jan 30 00:10:45 crc kubenswrapper[5104]: E0130 00:10:45.712607 5104 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188f59b8fdcb31fa\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59b8fdcb31fa openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:29.64670105 +0000 UTC m=+10.379040309,LastTimestamp:2026-01-30 00:10:45.706195974 +0000 UTC m=+26.438535223,Count:7,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 00:10:46 crc kubenswrapper[5104]: I0130 00:10:46.274000 5104 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 30 00:10:46 crc kubenswrapper[5104]: I0130 00:10:46.274257 5104 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:10:46 crc kubenswrapper[5104]: I0130 00:10:46.275545 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:10:46 crc kubenswrapper[5104]: I0130 00:10:46.275582 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:10:46 crc kubenswrapper[5104]: I0130 00:10:46.275599 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:10:46 crc kubenswrapper[5104]: E0130 00:10:46.276036 5104 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 30 00:10:46 crc kubenswrapper[5104]: I0130 00:10:46.280266 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 30 00:10:46 crc kubenswrapper[5104]: I0130 00:10:46.421202 5104 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:10:46 crc kubenswrapper[5104]: I0130 00:10:46.707132 5104 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:10:46 crc kubenswrapper[5104]: I0130 00:10:46.708138 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:10:46 crc kubenswrapper[5104]: I0130 00:10:46.708176 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:10:46 crc kubenswrapper[5104]: I0130 00:10:46.708189 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:10:46 crc kubenswrapper[5104]: E0130 00:10:46.713478 5104 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 30 00:10:47 crc kubenswrapper[5104]: I0130 00:10:47.421756 5104 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:10:48 crc kubenswrapper[5104]: I0130 00:10:48.417877 5104 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:10:49 crc kubenswrapper[5104]: E0130 00:10:49.366601 5104 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jan 30 00:10:49 crc kubenswrapper[5104]: I0130 00:10:49.420670 5104 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:10:49 crc kubenswrapper[5104]: I0130 00:10:49.595358 5104 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:10:49 crc kubenswrapper[5104]: E0130 00:10:49.595427 5104 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Jan 30 00:10:49 crc kubenswrapper[5104]: I0130 00:10:49.596327 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:10:49 crc kubenswrapper[5104]: I0130 00:10:49.596381 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:10:49 crc kubenswrapper[5104]: I0130 00:10:49.596393 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:10:49 crc kubenswrapper[5104]: I0130 00:10:49.596423 5104 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Jan 30 00:10:49 crc kubenswrapper[5104]: E0130 00:10:49.607774 5104 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Jan 30 00:10:50 crc kubenswrapper[5104]: I0130 00:10:50.418532 5104 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:10:50 crc kubenswrapper[5104]: E0130 00:10:50.610902 5104 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 30 00:10:50 crc kubenswrapper[5104]: E0130 00:10:50.708309 5104 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Jan 30 00:10:51 crc kubenswrapper[5104]: I0130 00:10:51.420944 5104 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:10:52 crc kubenswrapper[5104]: I0130 00:10:52.418844 5104 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:10:53 crc kubenswrapper[5104]: I0130 00:10:53.419365 5104 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:10:54 crc kubenswrapper[5104]: I0130 00:10:54.421801 5104 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:10:55 crc kubenswrapper[5104]: E0130 00:10:55.421259 5104 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jan 30 00:10:55 crc kubenswrapper[5104]: I0130 00:10:55.421407 5104 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:10:56 crc kubenswrapper[5104]: I0130 00:10:56.420076 5104 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:10:56 crc kubenswrapper[5104]: E0130 00:10:56.601164 5104 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Jan 30 00:10:56 crc kubenswrapper[5104]: I0130 00:10:56.608396 5104 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:10:56 crc kubenswrapper[5104]: I0130 00:10:56.609440 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:10:56 crc kubenswrapper[5104]: I0130 00:10:56.609628 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:10:56 crc kubenswrapper[5104]: I0130 00:10:56.609773 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:10:56 crc kubenswrapper[5104]: I0130 00:10:56.609960 5104 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Jan 30 00:10:56 crc kubenswrapper[5104]: E0130 00:10:56.619711 5104 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Jan 30 00:10:57 crc kubenswrapper[5104]: I0130 00:10:57.417995 5104 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:10:58 crc kubenswrapper[5104]: I0130 00:10:58.419384 5104 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:10:58 crc kubenswrapper[5104]: I0130 00:10:58.525710 5104 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:10:58 crc kubenswrapper[5104]: I0130 00:10:58.526841 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:10:58 crc kubenswrapper[5104]: I0130 00:10:58.527080 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:10:58 crc kubenswrapper[5104]: I0130 00:10:58.527220 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:10:58 crc kubenswrapper[5104]: E0130 00:10:58.527933 5104 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 30 00:10:58 crc kubenswrapper[5104]: I0130 00:10:58.528408 5104 scope.go:117] "RemoveContainer" containerID="0b67f5e058def325e02fa3b6dc481244946e645830e28a7e4d8c1ccdd52d047a"
Jan 30 00:10:58 crc kubenswrapper[5104]: E0130 00:10:58.528934 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Jan 30 00:10:58 crc kubenswrapper[5104]: E0130 00:10:58.536830 5104 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188f59b8fdcb31fa\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59b8fdcb31fa openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:29.64670105 +0000 UTC m=+10.379040309,LastTimestamp:2026-01-30 00:10:58.528880469 +0000 UTC m=+39.261219728,Count:8,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 00:10:58 crc kubenswrapper[5104]: E0130 00:10:58.756771 5104 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jan 30 00:10:59 crc kubenswrapper[5104]: I0130 00:10:59.422023 5104 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:11:00 crc kubenswrapper[5104]: I0130 00:11:00.421466 5104 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:11:00 crc kubenswrapper[5104]: E0130 00:11:00.612444 5104 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 30 00:11:01 crc kubenswrapper[5104]: I0130 00:11:01.420714 5104 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:11:02 crc kubenswrapper[5104]: I0130 00:11:02.418821 5104 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:11:03 crc kubenswrapper[5104]: I0130 00:11:03.420565 5104 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:11:03 crc kubenswrapper[5104]: E0130 00:11:03.611841 5104 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Jan 30 00:11:03 crc kubenswrapper[5104]: I0130 00:11:03.619965 5104 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:11:03 crc kubenswrapper[5104]: I0130 00:11:03.621233 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:11:03 crc kubenswrapper[5104]: I0130 00:11:03.621304 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:11:03 crc kubenswrapper[5104]: I0130 00:11:03.621334 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:11:03 crc kubenswrapper[5104]: I0130 00:11:03.621381 5104 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Jan 30 00:11:03 crc kubenswrapper[5104]: E0130 00:11:03.636056 5104 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Jan 30 00:11:04 crc kubenswrapper[5104]: I0130 00:11:04.419781 5104 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:11:04 crc kubenswrapper[5104]: E0130 00:11:04.764943 5104 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jan 30 00:11:05 crc kubenswrapper[5104]: I0130 00:11:05.421356 5104 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:11:05 crc kubenswrapper[5104]: E0130 00:11:05.880314 5104 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Jan 30 00:11:06 crc kubenswrapper[5104]: I0130 00:11:06.420491 5104 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:11:07 crc kubenswrapper[5104]: I0130 00:11:07.420871 5104 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:11:08 crc kubenswrapper[5104]: I0130 00:11:08.420220 5104 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:11:08 crc kubenswrapper[5104]: I0130 00:11:08.459877 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 30 00:11:08 crc kubenswrapper[5104]: I0130 00:11:08.460157 5104 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:11:08 crc kubenswrapper[5104]: I0130 00:11:08.461244 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:11:08 crc kubenswrapper[5104]: I0130 00:11:08.461294 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:11:08 crc kubenswrapper[5104]: I0130 00:11:08.461307 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:11:08 crc kubenswrapper[5104]: E0130 00:11:08.461706 5104 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 30 00:11:09 crc kubenswrapper[5104]: I0130 00:11:09.421652 5104 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:11:09 crc kubenswrapper[5104]: I0130 00:11:09.524966 5104 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:11:09 crc kubenswrapper[5104]: I0130 00:11:09.526079 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:11:09 crc kubenswrapper[5104]: I0130 00:11:09.526106 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:11:09 crc kubenswrapper[5104]: I0130 00:11:09.526117 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:11:09 crc kubenswrapper[5104]: E0130 00:11:09.526479 5104 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 30 00:11:09 crc kubenswrapper[5104]: I0130 00:11:09.526774 5104 scope.go:117] "RemoveContainer" containerID="0b67f5e058def325e02fa3b6dc481244946e645830e28a7e4d8c1ccdd52d047a"
Jan 30 00:11:09 crc kubenswrapper[5104]: E0130 00:11:09.534715 5104 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188f59b7b790b084\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59b7b790b084 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:24.173494404 +0000 UTC m=+4.905833633,LastTimestamp:2026-01-30 00:11:09.52836813 +0000 UTC m=+50.260707389,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 00:11:10 crc kubenswrapper[5104]: I0130 00:11:10.414729 5104 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:11:10 crc kubenswrapper[5104]: E0130 00:11:10.612994 5104 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 30 00:11:10 crc kubenswrapper[5104]: E0130 00:11:10.618563 5104 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Jan 30 00:11:10 crc kubenswrapper[5104]: I0130 00:11:10.636859 5104 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:11:10 crc kubenswrapper[5104]: I0130 00:11:10.638121 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:11:10 crc kubenswrapper[5104]: I0130 00:11:10.638193 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:11:10 crc kubenswrapper[5104]: I0130 00:11:10.638214 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:11:10 crc kubenswrapper[5104]: I0130 00:11:10.638254 5104 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Jan 30 00:11:10 crc kubenswrapper[5104]: E0130 00:11:10.647835 5104 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Jan 30 00:11:10 crc kubenswrapper[5104]: I0130 00:11:10.787337 5104 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log"
Jan 30 00:11:10 crc kubenswrapper[5104]: I0130 00:11:10.789754 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"a59bc6c54fddbc4eea03ba9234e68465071e0ae39793c65b4dee93d13d3d8fc2"}
Jan 30 00:11:10 crc kubenswrapper[5104]: I0130 00:11:10.789972 5104 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:11:10 crc kubenswrapper[5104]: I0130 00:11:10.790480 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:11:10 crc kubenswrapper[5104]: I0130 00:11:10.790512 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:11:10 crc kubenswrapper[5104]: I0130 00:11:10.790522 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:11:10 crc kubenswrapper[5104]: E0130 00:11:10.790816 5104 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 30 00:11:11 crc kubenswrapper[5104]: I0130 00:11:11.422080 5104 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:11:11 crc kubenswrapper[5104]: I0130 00:11:11.793841 5104 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log"
Jan 30 00:11:11 crc kubenswrapper[5104]: I0130 00:11:11.794449 5104 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log"
Jan 30 00:11:11 crc kubenswrapper[5104]: I0130 00:11:11.796154 5104 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="a59bc6c54fddbc4eea03ba9234e68465071e0ae39793c65b4dee93d13d3d8fc2" exitCode=255
Jan 30 00:11:11 crc kubenswrapper[5104]: I0130 00:11:11.796242 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"a59bc6c54fddbc4eea03ba9234e68465071e0ae39793c65b4dee93d13d3d8fc2"}
Jan 30 00:11:11 crc kubenswrapper[5104]: I0130 00:11:11.796366 5104 scope.go:117] "RemoveContainer" containerID="0b67f5e058def325e02fa3b6dc481244946e645830e28a7e4d8c1ccdd52d047a"
Jan 30 00:11:11 crc kubenswrapper[5104]: I0130 00:11:11.796528 5104 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:11:11 crc kubenswrapper[5104]: I0130 00:11:11.797225 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:11:11 crc kubenswrapper[5104]: I0130 00:11:11.797290 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:11:11 crc kubenswrapper[5104]: I0130 00:11:11.797314 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:11:11 crc kubenswrapper[5104]: E0130 00:11:11.797899 5104 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 30 00:11:11 crc kubenswrapper[5104]: I0130 00:11:11.798311 5104 scope.go:117] "RemoveContainer" containerID="a59bc6c54fddbc4eea03ba9234e68465071e0ae39793c65b4dee93d13d3d8fc2"
Jan 30 00:11:11 crc kubenswrapper[5104]: E0130 00:11:11.798625 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Jan 30 00:11:11 crc kubenswrapper[5104]: E0130 00:11:11.810521 5104 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188f59b8fdcb31fa\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59b8fdcb31fa openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:29.64670105 +0000 UTC m=+10.379040309,LastTimestamp:2026-01-30 00:11:11.798575737 +0000 UTC m=+52.530914986,Count:9,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 00:11:12 crc kubenswrapper[5104]: I0130 00:11:12.419327 5104 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:11:12 crc kubenswrapper[5104]: I0130 00:11:12.800378 5104 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log"
Jan 30 00:11:13 crc kubenswrapper[5104]: I0130 00:11:13.419518 5104 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:11:14 crc kubenswrapper[5104]: I0130
00:11:14.419514 5104 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:11:14 crc kubenswrapper[5104]: I0130 00:11:14.801768 5104 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:11:14 crc kubenswrapper[5104]: I0130 00:11:14.802224 5104 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:11:14 crc kubenswrapper[5104]: I0130 00:11:14.803387 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:14 crc kubenswrapper[5104]: I0130 00:11:14.803472 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:14 crc kubenswrapper[5104]: I0130 00:11:14.803487 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:14 crc kubenswrapper[5104]: E0130 00:11:14.804009 5104 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:11:14 crc kubenswrapper[5104]: I0130 00:11:14.804337 5104 scope.go:117] "RemoveContainer" containerID="a59bc6c54fddbc4eea03ba9234e68465071e0ae39793c65b4dee93d13d3d8fc2" Jan 30 00:11:14 crc kubenswrapper[5104]: E0130 00:11:14.804576 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 30 00:11:14 crc 
kubenswrapper[5104]: E0130 00:11:14.812555 5104 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188f59b8fdcb31fa\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59b8fdcb31fa openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:29.64670105 +0000 UTC m=+10.379040309,LastTimestamp:2026-01-30 00:11:14.804536182 +0000 UTC m=+55.536875411,Count:10,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 00:11:15 crc kubenswrapper[5104]: I0130 00:11:15.417998 5104 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:11:16 crc kubenswrapper[5104]: I0130 00:11:16.416706 5104 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:11:17 crc kubenswrapper[5104]: I0130 00:11:17.421483 5104 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:11:17 crc kubenswrapper[5104]: E0130 00:11:17.624611 5104 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Jan 30 00:11:17 crc kubenswrapper[5104]: I0130 00:11:17.648346 5104 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:11:17 crc kubenswrapper[5104]: I0130 00:11:17.649696 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:11:17 crc kubenswrapper[5104]: I0130 00:11:17.649752 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:11:17 crc kubenswrapper[5104]: I0130 00:11:17.649772 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:11:17 crc kubenswrapper[5104]: I0130 00:11:17.649805 5104 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Jan 30 00:11:17 crc kubenswrapper[5104]: E0130 00:11:17.669460 5104 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Jan 30 00:11:18 crc kubenswrapper[5104]: I0130 00:11:18.421313 5104 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:11:19 crc kubenswrapper[5104]: I0130 00:11:19.420064 5104 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:11:20 crc kubenswrapper[5104]: I0130 00:11:20.420796 5104 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:11:20 crc kubenswrapper[5104]: E0130 00:11:20.613357 5104 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 30 00:11:20 crc kubenswrapper[5104]: I0130 00:11:20.791107 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 00:11:20 crc kubenswrapper[5104]: I0130 00:11:20.791992 5104 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:11:20 crc kubenswrapper[5104]: I0130 00:11:20.793520 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:11:20 crc kubenswrapper[5104]: I0130 00:11:20.793724 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:11:20 crc kubenswrapper[5104]: I0130 00:11:20.794198 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:11:20 crc kubenswrapper[5104]: E0130 00:11:20.795330 5104 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 30 00:11:20 crc kubenswrapper[5104]: I0130 00:11:20.796136 5104 scope.go:117] "RemoveContainer" containerID="a59bc6c54fddbc4eea03ba9234e68465071e0ae39793c65b4dee93d13d3d8fc2"
Jan 30 00:11:20 crc kubenswrapper[5104]: E0130 00:11:20.796669 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Jan 30 00:11:20 crc kubenswrapper[5104]: E0130 00:11:20.804154 5104 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188f59b8fdcb31fa\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59b8fdcb31fa openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:29.64670105 +0000 UTC m=+10.379040309,LastTimestamp:2026-01-30 00:11:20.796618787 +0000 UTC m=+61.528958036,Count:11,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 00:11:21 crc kubenswrapper[5104]: I0130 00:11:21.420104 5104 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:11:22 crc kubenswrapper[5104]: I0130 00:11:22.421496 5104 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:11:23 crc kubenswrapper[5104]: I0130 00:11:23.420273 5104 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:11:24 crc kubenswrapper[5104]: I0130 00:11:24.421562 5104 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:11:24 crc kubenswrapper[5104]: E0130 00:11:24.632984 5104 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Jan 30 00:11:24 crc kubenswrapper[5104]: I0130 00:11:24.669645 5104 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:11:24 crc kubenswrapper[5104]: I0130 00:11:24.670838 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:11:24 crc kubenswrapper[5104]: I0130 00:11:24.670943 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:11:24 crc kubenswrapper[5104]: I0130 00:11:24.670970 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:11:24 crc kubenswrapper[5104]: I0130 00:11:24.671015 5104 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Jan 30 00:11:24 crc kubenswrapper[5104]: E0130 00:11:24.684603
5104 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Jan 30 00:11:25 crc kubenswrapper[5104]: I0130 00:11:25.418342 5104 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:11:26 crc kubenswrapper[5104]: I0130 00:11:26.420033 5104 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:11:27 crc kubenswrapper[5104]: I0130 00:11:27.420148 5104 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:11:28 crc kubenswrapper[5104]: I0130 00:11:28.014210 5104 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-b882c"
Jan 30 00:11:28 crc kubenswrapper[5104]: I0130 00:11:28.022129 5104 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-b882c"
Jan 30 00:11:28 crc kubenswrapper[5104]: I0130 00:11:28.055394 5104 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Jan 30 00:11:28 crc kubenswrapper[5104]: I0130 00:11:28.242347 5104 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Jan 30 00:11:29 crc kubenswrapper[5104]: I0130 00:11:29.023557 5104 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kube-apiserver-client-kubelet" expiration="2026-03-01 00:06:28 +0000 UTC" deadline="2026-02-23 20:19:25.570643295 +0000 UTC"
Jan 30 00:11:29 crc kubenswrapper[5104]: I0130 00:11:29.023605 5104 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kube-apiserver-client-kubelet" sleep="596h7m56.547041658s"
Jan 30 00:11:30 crc kubenswrapper[5104]: E0130 00:11:30.614508 5104 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 30 00:11:31 crc kubenswrapper[5104]: I0130 00:11:31.524947 5104 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:11:31 crc kubenswrapper[5104]: I0130 00:11:31.525933 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:11:31 crc kubenswrapper[5104]: I0130 00:11:31.526043 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:11:31 crc kubenswrapper[5104]: I0130 00:11:31.526115 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:11:31 crc kubenswrapper[5104]: E0130 00:11:31.526491 5104 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 30 00:11:31 crc kubenswrapper[5104]: I0130 00:11:31.685620 5104 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:11:31 crc kubenswrapper[5104]: I0130 00:11:31.688529 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:11:31 crc kubenswrapper[5104]: I0130 00:11:31.688568 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:11:31 crc kubenswrapper[5104]: I0130 00:11:31.688584 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:11:31 crc kubenswrapper[5104]: I0130 00:11:31.688697 5104 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Jan 30 00:11:31 crc kubenswrapper[5104]: I0130 00:11:31.697441 5104 kubelet_node_status.go:127] "Node was previously registered" node="crc"
Jan 30 00:11:31 crc kubenswrapper[5104]: I0130 00:11:31.697734 5104 kubelet_node_status.go:81] "Successfully registered node" node="crc"
Jan 30 00:11:31 crc kubenswrapper[5104]: E0130 00:11:31.697765 5104 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found"
Jan 30 00:11:31 crc kubenswrapper[5104]: I0130 00:11:31.700779 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:11:31 crc kubenswrapper[5104]: I0130 00:11:31.700845 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:11:31 crc kubenswrapper[5104]: I0130 00:11:31.700874 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:11:31 crc kubenswrapper[5104]: I0130 00:11:31.700891 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 00:11:31 crc kubenswrapper[5104]: I0130 00:11:31.700906 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:31Z","lastTransitionTime":"2026-01-30T00:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Jan 30 00:11:31 crc kubenswrapper[5104]: E0130 00:11:31.718746 5104 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ddbe5ca8-cca6-45e8-a308-ea9fc8d3013e\\\",\\\"systemUUID\\\":\\\"6d24271c-4d6f-4082-96cf-a2854971c0dc\\\"}}}\" for node
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:31 crc kubenswrapper[5104]: I0130 00:11:31.727449 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:31 crc kubenswrapper[5104]: I0130 00:11:31.727658 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:31 crc kubenswrapper[5104]: I0130 00:11:31.727801 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:31 crc kubenswrapper[5104]: I0130 00:11:31.728000 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:31 crc kubenswrapper[5104]: I0130 00:11:31.728139 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:31Z","lastTransitionTime":"2026-01-30T00:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:31 crc kubenswrapper[5104]: E0130 00:11:31.740759 5104 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ddbe5ca8-cca6-45e8-a308-ea9fc8d3013e\\\",\\\"systemUUID\\\":\\\"6d24271c-4d6f-4082-96cf-a2854971c0dc\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:31 crc kubenswrapper[5104]: I0130 00:11:31.750500 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:31 crc kubenswrapper[5104]: I0130 00:11:31.750549 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:31 crc kubenswrapper[5104]: I0130 00:11:31.750563 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:31 crc kubenswrapper[5104]: I0130 00:11:31.750582 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:31 crc kubenswrapper[5104]: I0130 00:11:31.750595 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:31Z","lastTransitionTime":"2026-01-30T00:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:31 crc kubenswrapper[5104]: I0130 00:11:31.770935 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:31 crc kubenswrapper[5104]: I0130 00:11:31.770977 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:31 crc kubenswrapper[5104]: I0130 00:11:31.770988 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:31 crc kubenswrapper[5104]: I0130 00:11:31.771003 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:31 crc kubenswrapper[5104]: I0130 00:11:31.771016 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:31Z","lastTransitionTime":"2026-01-30T00:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:31 crc kubenswrapper[5104]: E0130 00:11:31.784589 5104 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ddbe5ca8-cca6-45e8-a308-ea9fc8d3013e\\\",\\\"systemUUID\\\":\\\"6d24271c-4d6f-4082-96cf-a2854971c0dc\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:31 crc kubenswrapper[5104]: E0130 00:11:31.784758 5104 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Jan 30 00:11:31 crc kubenswrapper[5104]: E0130 00:11:31.784783 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:31 crc kubenswrapper[5104]: E0130 00:11:31.885574 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:31 crc kubenswrapper[5104]: E0130 00:11:31.986636 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:32 crc kubenswrapper[5104]: E0130 00:11:32.087173 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:32 crc kubenswrapper[5104]: E0130 00:11:32.187339 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:32 crc kubenswrapper[5104]: E0130 00:11:32.288751 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:32 crc kubenswrapper[5104]: E0130 00:11:32.389747 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:32 crc kubenswrapper[5104]: E0130 00:11:32.490232 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:32 crc kubenswrapper[5104]: E0130 00:11:32.590898 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:32 crc kubenswrapper[5104]: E0130 00:11:32.691957 5104 
kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:32 crc kubenswrapper[5104]: E0130 00:11:32.793056 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:32 crc kubenswrapper[5104]: E0130 00:11:32.893528 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:32 crc kubenswrapper[5104]: E0130 00:11:32.994304 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:33 crc kubenswrapper[5104]: E0130 00:11:33.095067 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:33 crc kubenswrapper[5104]: E0130 00:11:33.195654 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:33 crc kubenswrapper[5104]: E0130 00:11:33.296583 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:33 crc kubenswrapper[5104]: E0130 00:11:33.397132 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:33 crc kubenswrapper[5104]: E0130 00:11:33.497956 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:33 crc kubenswrapper[5104]: E0130 00:11:33.598140 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:33 crc kubenswrapper[5104]: E0130 00:11:33.699269 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:33 crc kubenswrapper[5104]: E0130 00:11:33.799423 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:33 crc 
kubenswrapper[5104]: E0130 00:11:33.900277 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:34 crc kubenswrapper[5104]: E0130 00:11:34.001113 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:34 crc kubenswrapper[5104]: E0130 00:11:34.102040 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:34 crc kubenswrapper[5104]: E0130 00:11:34.202910 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:34 crc kubenswrapper[5104]: E0130 00:11:34.304060 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:34 crc kubenswrapper[5104]: E0130 00:11:34.404899 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:34 crc kubenswrapper[5104]: E0130 00:11:34.504994 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:34 crc kubenswrapper[5104]: E0130 00:11:34.605408 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:34 crc kubenswrapper[5104]: E0130 00:11:34.705778 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:34 crc kubenswrapper[5104]: I0130 00:11:34.790162 5104 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Jan 30 00:11:34 crc kubenswrapper[5104]: E0130 00:11:34.805906 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:34 crc kubenswrapper[5104]: E0130 00:11:34.906072 5104 kubelet_node_status.go:515] "Error getting the current node from 
lister" err="node \"crc\" not found" Jan 30 00:11:35 crc kubenswrapper[5104]: E0130 00:11:35.006736 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:35 crc kubenswrapper[5104]: E0130 00:11:35.107223 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:35 crc kubenswrapper[5104]: E0130 00:11:35.208416 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:35 crc kubenswrapper[5104]: E0130 00:11:35.308926 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:35 crc kubenswrapper[5104]: E0130 00:11:35.409879 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:35 crc kubenswrapper[5104]: E0130 00:11:35.509997 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:35 crc kubenswrapper[5104]: I0130 00:11:35.525399 5104 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:11:35 crc kubenswrapper[5104]: I0130 00:11:35.526137 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:35 crc kubenswrapper[5104]: I0130 00:11:35.526181 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:35 crc kubenswrapper[5104]: I0130 00:11:35.526194 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:35 crc kubenswrapper[5104]: E0130 00:11:35.526721 5104 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:11:35 crc kubenswrapper[5104]: I0130 
00:11:35.527002 5104 scope.go:117] "RemoveContainer" containerID="a59bc6c54fddbc4eea03ba9234e68465071e0ae39793c65b4dee93d13d3d8fc2" Jan 30 00:11:35 crc kubenswrapper[5104]: E0130 00:11:35.527247 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 30 00:11:35 crc kubenswrapper[5104]: E0130 00:11:35.610548 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:35 crc kubenswrapper[5104]: E0130 00:11:35.711730 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:35 crc kubenswrapper[5104]: E0130 00:11:35.812415 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:35 crc kubenswrapper[5104]: E0130 00:11:35.913194 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:36 crc kubenswrapper[5104]: E0130 00:11:36.014112 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:36 crc kubenswrapper[5104]: E0130 00:11:36.115278 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:36 crc kubenswrapper[5104]: E0130 00:11:36.215538 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:36 crc kubenswrapper[5104]: E0130 00:11:36.316296 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:36 crc kubenswrapper[5104]: 
E0130 00:11:36.417447 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:36 crc kubenswrapper[5104]: E0130 00:11:36.518544 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:36 crc kubenswrapper[5104]: E0130 00:11:36.619569 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:36 crc kubenswrapper[5104]: E0130 00:11:36.719644 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:36 crc kubenswrapper[5104]: E0130 00:11:36.820762 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:36 crc kubenswrapper[5104]: E0130 00:11:36.921555 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:37 crc kubenswrapper[5104]: E0130 00:11:37.022280 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:37 crc kubenswrapper[5104]: E0130 00:11:37.123189 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:37 crc kubenswrapper[5104]: I0130 00:11:37.174162 5104 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160" Jan 30 00:11:37 crc kubenswrapper[5104]: E0130 00:11:37.223636 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:37 crc kubenswrapper[5104]: E0130 00:11:37.323799 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:37 crc kubenswrapper[5104]: E0130 00:11:37.424164 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" 
not found" Jan 30 00:11:37 crc kubenswrapper[5104]: E0130 00:11:37.524459 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:37 crc kubenswrapper[5104]: E0130 00:11:37.625614 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:37 crc kubenswrapper[5104]: E0130 00:11:37.726632 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:37 crc kubenswrapper[5104]: E0130 00:11:37.826952 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:37 crc kubenswrapper[5104]: E0130 00:11:37.927814 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:38 crc kubenswrapper[5104]: E0130 00:11:38.028939 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:38 crc kubenswrapper[5104]: E0130 00:11:38.130076 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:38 crc kubenswrapper[5104]: E0130 00:11:38.231186 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:38 crc kubenswrapper[5104]: E0130 00:11:38.332077 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:38 crc kubenswrapper[5104]: E0130 00:11:38.432515 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:38 crc kubenswrapper[5104]: E0130 00:11:38.533352 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:38 crc kubenswrapper[5104]: E0130 00:11:38.634145 5104 kubelet_node_status.go:515] "Error getting the 
current node from lister" err="node \"crc\" not found" Jan 30 00:11:38 crc kubenswrapper[5104]: E0130 00:11:38.734758 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:38 crc kubenswrapper[5104]: E0130 00:11:38.835419 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:38 crc kubenswrapper[5104]: E0130 00:11:38.936308 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:39 crc kubenswrapper[5104]: E0130 00:11:39.037232 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:39 crc kubenswrapper[5104]: E0130 00:11:39.138237 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:39 crc kubenswrapper[5104]: E0130 00:11:39.238760 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:39 crc kubenswrapper[5104]: E0130 00:11:39.339788 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:39 crc kubenswrapper[5104]: E0130 00:11:39.440141 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:39 crc kubenswrapper[5104]: E0130 00:11:39.540957 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:39 crc kubenswrapper[5104]: E0130 00:11:39.641629 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:39 crc kubenswrapper[5104]: E0130 00:11:39.742761 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:39 crc kubenswrapper[5104]: E0130 00:11:39.843898 5104 
kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:39 crc kubenswrapper[5104]: E0130 00:11:39.944726 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:40 crc kubenswrapper[5104]: E0130 00:11:40.044971 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:40 crc kubenswrapper[5104]: E0130 00:11:40.145119 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:40 crc kubenswrapper[5104]: E0130 00:11:40.245281 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:40 crc kubenswrapper[5104]: E0130 00:11:40.345579 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:40 crc kubenswrapper[5104]: E0130 00:11:40.446048 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:40 crc kubenswrapper[5104]: E0130 00:11:40.546543 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:40 crc kubenswrapper[5104]: E0130 00:11:40.615303 5104 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 30 00:11:40 crc kubenswrapper[5104]: E0130 00:11:40.647238 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:40 crc kubenswrapper[5104]: E0130 00:11:40.747631 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:40 crc kubenswrapper[5104]: E0130 00:11:40.849229 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" 
Jan 30 00:11:40 crc kubenswrapper[5104]: E0130 00:11:40.949746 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:41 crc kubenswrapper[5104]: E0130 00:11:41.050956 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:41 crc kubenswrapper[5104]: E0130 00:11:41.151618 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:41 crc kubenswrapper[5104]: E0130 00:11:41.252172 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:41 crc kubenswrapper[5104]: E0130 00:11:41.352395 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:41 crc kubenswrapper[5104]: E0130 00:11:41.452687 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:41 crc kubenswrapper[5104]: E0130 00:11:41.553228 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:41 crc kubenswrapper[5104]: E0130 00:11:41.654235 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:41 crc kubenswrapper[5104]: E0130 00:11:41.755404 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:41 crc kubenswrapper[5104]: E0130 00:11:41.855788 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:41 crc kubenswrapper[5104]: E0130 00:11:41.956437 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:42 crc kubenswrapper[5104]: E0130 00:11:42.040124 5104 kubelet_node_status.go:597] "Error updating node status, will 
retry" err="error getting node \"crc\": node \"crc\" not found" Jan 30 00:11:42 crc kubenswrapper[5104]: I0130 00:11:42.044780 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:42 crc kubenswrapper[5104]: I0130 00:11:42.044839 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:42 crc kubenswrapper[5104]: I0130 00:11:42.044890 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:42 crc kubenswrapper[5104]: I0130 00:11:42.044915 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:42 crc kubenswrapper[5104]: I0130 00:11:42.044932 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:42Z","lastTransitionTime":"2026-01-30T00:11:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:42 crc kubenswrapper[5104]: E0130 00:11:42.060085 5104 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ddbe5ca8-cca6-45e8-a308-ea9fc8d3013e\\\",\\\"systemUUID\\\":\\\"6d24271c-4d6f-4082-96cf-a2854971c0dc\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:42 crc kubenswrapper[5104]: I0130 00:11:42.065042 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:42 crc kubenswrapper[5104]: I0130 00:11:42.065217 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:42 crc kubenswrapper[5104]: I0130 00:11:42.065356 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:42 crc kubenswrapper[5104]: I0130 00:11:42.065474 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:42 crc kubenswrapper[5104]: I0130 00:11:42.065603 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:42Z","lastTransitionTime":"2026-01-30T00:11:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:42 crc kubenswrapper[5104]: E0130 00:11:42.081074 5104 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ddbe5ca8-cca6-45e8-a308-ea9fc8d3013e\\\",\\\"systemUUID\\\":\\\"6d24271c-4d6f-4082-96cf-a2854971c0dc\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:42 crc kubenswrapper[5104]: I0130 00:11:42.086200 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:42 crc kubenswrapper[5104]: I0130 00:11:42.086252 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:42 crc kubenswrapper[5104]: I0130 00:11:42.086273 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:42 crc kubenswrapper[5104]: I0130 00:11:42.086290 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:42 crc kubenswrapper[5104]: I0130 00:11:42.086303 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:42Z","lastTransitionTime":"2026-01-30T00:11:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:42 crc kubenswrapper[5104]: E0130 00:11:42.101112 5104 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ddbe5ca8-cca6-45e8-a308-ea9fc8d3013e\\\",\\\"systemUUID\\\":\\\"6d24271c-4d6f-4082-96cf-a2854971c0dc\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:42 crc kubenswrapper[5104]: I0130 00:11:42.104691 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:42 crc kubenswrapper[5104]: I0130 00:11:42.104745 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:42 crc kubenswrapper[5104]: I0130 00:11:42.104758 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:42 crc kubenswrapper[5104]: I0130 00:11:42.104775 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:42 crc kubenswrapper[5104]: I0130 00:11:42.104787 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:42Z","lastTransitionTime":"2026-01-30T00:11:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:42 crc kubenswrapper[5104]: E0130 00:11:42.119174 5104 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ddbe5ca8-cca6-45e8-a308-ea9fc8d3013e\\\",\\\"systemUUID\\\":\\\"6d24271c-4d6f-4082-96cf-a2854971c0dc\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:42 crc kubenswrapper[5104]: E0130 00:11:42.119455 5104 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Jan 30 00:11:42 crc kubenswrapper[5104]: E0130 00:11:42.119496 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:42 crc kubenswrapper[5104]: E0130 00:11:42.220385 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:42 crc kubenswrapper[5104]: E0130 00:11:42.320781 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:42 crc kubenswrapper[5104]: E0130 00:11:42.421124 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:42 crc kubenswrapper[5104]: E0130 00:11:42.521291 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:42 crc kubenswrapper[5104]: E0130 00:11:42.621710 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:42 crc kubenswrapper[5104]: E0130 00:11:42.722114 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:42 crc kubenswrapper[5104]: E0130 00:11:42.822490 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:42 crc kubenswrapper[5104]: E0130 00:11:42.923352 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:43 crc kubenswrapper[5104]: E0130 00:11:43.023586 5104 
kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:43 crc kubenswrapper[5104]: E0130 00:11:43.123780 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:43 crc kubenswrapper[5104]: E0130 00:11:43.224404 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:43 crc kubenswrapper[5104]: E0130 00:11:43.324929 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:43 crc kubenswrapper[5104]: E0130 00:11:43.425811 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:43 crc kubenswrapper[5104]: E0130 00:11:43.526667 5104 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:43 crc kubenswrapper[5104]: I0130 00:11:43.576153 5104 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160" Jan 30 00:11:43 crc kubenswrapper[5104]: I0130 00:11:43.629734 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:43 crc kubenswrapper[5104]: I0130 00:11:43.629793 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:43 crc kubenswrapper[5104]: I0130 00:11:43.629812 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:43 crc kubenswrapper[5104]: I0130 00:11:43.629884 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:43 crc kubenswrapper[5104]: I0130 00:11:43.629911 5104 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:43Z","lastTransitionTime":"2026-01-30T00:11:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:43 crc kubenswrapper[5104]: I0130 00:11:43.644201 5104 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-etcd/etcd-crc" Jan 30 00:11:43 crc kubenswrapper[5104]: I0130 00:11:43.661946 5104 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:11:43 crc kubenswrapper[5104]: I0130 00:11:43.732486 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:43 crc kubenswrapper[5104]: I0130 00:11:43.732975 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:43 crc kubenswrapper[5104]: I0130 00:11:43.733066 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:43 crc kubenswrapper[5104]: I0130 00:11:43.733181 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:43 crc kubenswrapper[5104]: I0130 00:11:43.733284 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:43Z","lastTransitionTime":"2026-01-30T00:11:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:43 crc kubenswrapper[5104]: I0130 00:11:43.773731 5104 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 00:11:43 crc kubenswrapper[5104]: I0130 00:11:43.835576 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:43 crc kubenswrapper[5104]: I0130 00:11:43.835627 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:43 crc kubenswrapper[5104]: I0130 00:11:43.835639 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:43 crc kubenswrapper[5104]: I0130 00:11:43.835655 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:43 crc kubenswrapper[5104]: I0130 00:11:43.835666 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:43Z","lastTransitionTime":"2026-01-30T00:11:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:43 crc kubenswrapper[5104]: I0130 00:11:43.860832 5104 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 00:11:43 crc kubenswrapper[5104]: I0130 00:11:43.938216 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:43 crc kubenswrapper[5104]: I0130 00:11:43.938596 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:43 crc kubenswrapper[5104]: I0130 00:11:43.938688 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:43 crc kubenswrapper[5104]: I0130 00:11:43.938778 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:43 crc kubenswrapper[5104]: I0130 00:11:43.938879 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:43Z","lastTransitionTime":"2026-01-30T00:11:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:43 crc kubenswrapper[5104]: I0130 00:11:43.964805 5104 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.041802 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.042513 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.042616 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.042717 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.042802 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:44Z","lastTransitionTime":"2026-01-30T00:11:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.145523 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.145606 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.145630 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.145660 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.145683 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:44Z","lastTransitionTime":"2026-01-30T00:11:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.249394 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.249929 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.250061 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.250180 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.250283 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:44Z","lastTransitionTime":"2026-01-30T00:11:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.354296 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.354377 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.354398 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.354424 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.354442 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:44Z","lastTransitionTime":"2026-01-30T00:11:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.431429 5104 apiserver.go:52] "Watching apiserver" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.440960 5104 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.441831 5104 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-qnqx2","openshift-machine-config-operator/machine-config-daemon-jzfxc","openshift-network-node-identity/network-node-identity-dgvkt","openshift-network-operator/iptables-alerter-5jnd7","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-multus/multus-bk79c","openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5","openshift-dns/node-resolver-qpj6b","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-multus/multus-additional-cni-plugins-9mfdf","openshift-multus/network-metrics-daemon-gvjb6","openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-zg4cj","openshift-ovn-kubernetes/ovnkube-node-dr5dp","openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6","openshift-network-diagnostics/network-check-target-fhkjl","openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv"] Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.450079 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.453127 5104 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:11:44 crc kubenswrapper[5104]: E0130 00:11:44.453431 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.453188 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:11:44 crc kubenswrapper[5104]: E0130 00:11:44.453685 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.454385 5104 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.455027 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.455091 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.455167 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.455955 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.457606 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.457709 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.457772 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.457841 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.457928 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:44Z","lastTransitionTime":"2026-01-30T00:11:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.458614 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.458962 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.458961 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.459092 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.459429 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.459842 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.474576 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.474674 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") 
pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.474798 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.477798 5104 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.486249 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.486542 5104 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-9mfdf" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.486644 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.489531 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.489873 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.489908 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.490576 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.491069 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.491017 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.496986 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-bk79c" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.497160 5104 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:11:44 crc kubenswrapper[5104]: E0130 00:11:44.498811 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.500042 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.500493 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.502757 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.505452 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 
00:11:44.506025 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-qpj6b" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.509569 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.509776 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.511070 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.514995 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-qnqx2" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.515202 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gvjb6" Jan 30 00:11:44 crc kubenswrapper[5104]: E0130 00:11:44.515342 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gvjb6" podUID="8549d8ab-08fd-4d10-b03e-d162d745184a" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.516420 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.517076 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.518285 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.519768 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.522083 5104 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-dr5dp" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.523242 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.525651 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.526060 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.526281 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.526501 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.526750 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.526515 5104 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.527961 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.529305 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-jzfxc" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.530802 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.530895 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.531923 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.532491 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.533691 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.534300 5104 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-zg4cj" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.536576 5104 scope.go:117] "RemoveContainer" containerID="a59bc6c54fddbc4eea03ba9234e68465071e0ae39793c65b4dee93d13d3d8fc2" Jan 30 00:11:44 crc kubenswrapper[5104]: E0130 00:11:44.536877 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.537204 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.537204 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.542747 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.546259 5104 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.559402 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.561463 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.561519 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.561543 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.561568 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.561588 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:44Z","lastTransitionTime":"2026-01-30T00:11:44Z","reason":"KubeletNotReady","message":"container runtime network 
not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.576054 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.576099 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.576129 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.576152 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.576172 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") 
" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.576195 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.576220 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.576244 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.576266 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.576290 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.576311 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.576332 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.576375 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.576395 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.576417 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.576467 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.576492 5104 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.576516 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tknt7\" (UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.576542 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.576563 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.576583 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.576606 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") 
pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.576626 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.576648 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.576671 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.576694 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.576718 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.576746 5104 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.576773 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.576804 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.576831 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.576882 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.576924 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" 
(UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.576949 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.576976 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.577002 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hm9x7\" (UniqueName: \"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.577027 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.577050 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.577071 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.577094 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.577118 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.577143 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.577168 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.577253 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:11:44 
crc kubenswrapper[5104]: I0130 00:11:44.577276 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.577297 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.577320 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.577340 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.577362 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.577385 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.577410 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.577432 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.577453 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.577475 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.577496 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " 
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.577517 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.577539 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.577561 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.577585 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.577607 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbc2l\" (UniqueName: \"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.577628 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.577654 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.577674 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.577703 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.577732 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.577758 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.577781 5104 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.577811 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.577846 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.577907 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.577936 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.577960 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") pod 
\"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.577981 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.578005 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.578030 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.578055 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.578076 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.578099 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.578128 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.578155 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.578178 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.578201 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.578225 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Jan 30 00:11:44 crc 
kubenswrapper[5104]: I0130 00:11:44.578246 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.578268 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.578289 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.578311 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.578335 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.578356 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.578378 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.578404 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.578431 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.578453 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.578480 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Jan 30 00:11:44 crc kubenswrapper[5104]: 
I0130 00:11:44.578502 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.578524 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.578546 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.578568 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.578591 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.578613 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") pod 
\"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.578654 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.578685 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.578708 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.578732 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.578756 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.578779 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume 
started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.578808 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.578831 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.578882 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.578909 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.578931 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: 
\"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.578954 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.578978 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.579003 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nb9c\" (UniqueName: \"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.579417 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.580264 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.580482 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.580827 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz" (OuterVolumeSpecName: "kube-api-access-grwfz") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "kube-api-access-grwfz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.580904 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.580900 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.581342 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.581501 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk" (OuterVolumeSpecName: "kube-api-access-w94wk") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "kube-api-access-w94wk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.581563 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities" (OuterVolumeSpecName: "utilities") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.581584 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.581634 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities" (OuterVolumeSpecName: "utilities") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.581786 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9" (OuterVolumeSpecName: "kube-api-access-99zj9") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "kube-api-access-99zj9". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.581776 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.581886 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.581924 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.581956 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.581987 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.582016 5104 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.582001 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn" (OuterVolumeSpecName: "kube-api-access-xnxbn") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "kube-api-access-xnxbn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.582041 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.582048 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config" (OuterVolumeSpecName: "config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.582066 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.582096 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.582119 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.582144 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.582162 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.582180 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for 
volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.582198 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.582161 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.582396 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.582222 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.582584 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.582716 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.582789 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.582820 5104 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.582893 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key" (OuterVolumeSpecName: "signing-key") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.582956 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.582991 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26xrl\" (UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.583048 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.583078 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") pod 
\"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.583130 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.583166 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.583218 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.583247 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.583305 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.583356 5104 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-l87hs\" (UniqueName: \"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.583388 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.583443 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.583475 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.583529 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") pod \"af41de71-79cf-4590-bbe9-9e8b848862cb\" (UID: \"af41de71-79cf-4590-bbe9-9e8b848862cb\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.583553 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfp5s\" (UniqueName: 
\"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.584233 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.584304 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.582999 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.583032 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr" (OuterVolumeSpecName: "kube-api-access-z5rsr") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "kube-api-access-z5rsr". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.583330 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.583431 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.583546 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.583596 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7" (OuterVolumeSpecName: "kube-api-access-tknt7") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "kube-api-access-tknt7". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.583786 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp" (OuterVolumeSpecName: "tmp") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.585425 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.583381 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.583835 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf" (OuterVolumeSpecName: "kube-api-access-nmmzf") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "kube-api-access-nmmzf". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.583617 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config" (OuterVolumeSpecName: "config") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.584065 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw" (OuterVolumeSpecName: "kube-api-access-5lcfw") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "kube-api-access-5lcfw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.584085 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c" (OuterVolumeSpecName: "kube-api-access-8nb9c") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "kube-api-access-8nb9c". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.584141 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.584184 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.584216 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.584335 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz" (OuterVolumeSpecName: "kube-api-access-ws8zz") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "kube-api-access-ws8zz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.584547 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.584596 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.584879 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.585334 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl" (OuterVolumeSpecName: "kube-api-access-26xrl") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "kube-api-access-26xrl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.585332 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap" (OuterVolumeSpecName: "whereabouts-flatfile-configmap") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "whereabouts-flatfile-configmap". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.585748 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.585780 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.585873 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.585911 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.585836 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.585944 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.585968 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.586000 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.586035 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.586108 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). 
InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.586112 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjwtd\" (UniqueName: \"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.586212 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.586250 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.586281 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.586313 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.586342 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.586372 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") pod \"0effdbcf-dd7d-404d-9d48-77536d665a5d\" (UID: \"0effdbcf-dd7d-404d-9d48-77536d665a5d\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.586401 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.586430 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.586459 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.586489 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: 
\"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.586518 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") pod \"e093be35-bb62-4843-b2e8-094545761610\" (UID: \"e093be35-bb62-4843-b2e8-094545761610\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.586545 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.586571 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.586598 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.586624 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.586653 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.586679 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.586707 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.586732 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.586752 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.586844 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.587016 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.587433 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.587449 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg" (OuterVolumeSpecName: "kube-api-access-wbmqg") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "kube-api-access-wbmqg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.587595 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b" (OuterVolumeSpecName: "kube-api-access-pgx6b") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "kube-api-access-pgx6b". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.587623 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.587652 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.587988 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq" (OuterVolumeSpecName: "kube-api-access-d4tqq") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "kube-api-access-d4tqq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.588011 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.588104 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.590639 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.588365 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h" (OuterVolumeSpecName: "kube-api-access-94l9h") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "kube-api-access-94l9h". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.588492 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.588752 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.588986 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.589012 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities" (OuterVolumeSpecName: "utilities") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.589030 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities" (OuterVolumeSpecName: "utilities") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.589207 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.589482 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv" (OuterVolumeSpecName: "kube-api-access-dztfv") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "kube-api-access-dztfv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.589642 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx" (OuterVolumeSpecName: "kube-api-access-l9stx") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "kube-api-access-l9stx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.590002 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf" (OuterVolumeSpecName: "kube-api-access-q4smf") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "kube-api-access-q4smf". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.590068 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.590084 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.590413 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.590438 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.591064 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.591232 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.591250 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.587711 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.591377 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.591509 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.591559 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.591548 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.591586 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.591596 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.591662 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.591695 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zth6t\" (UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.591728 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqbfk\" (UniqueName: \"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") pod 
\"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.591757 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.591788 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.591816 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.591845 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.591894 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.591922 5104 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.591951 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jjkz\" (UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.591977 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.592009 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.592036 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.592062 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") pod 
\"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.592089 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.592117 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.592144 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsb9b\" (UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.592171 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.592199 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.594171 5104 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.594218 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.594253 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.594286 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.594316 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.594348 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" 
(UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.594380 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgrkj\" (UniqueName: \"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.594409 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.594438 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.594463 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.594497 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.594529 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.594560 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.594589 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.594617 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twvbl\" (UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.594645 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.594675 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: 
\"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.594701 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.594729 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pskd\" (UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.594757 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4g8ts\" (UniqueName: \"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.594786 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.594813 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.594865 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started 
for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.594966 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbld6\" (UniqueName: \"kubernetes.io/projected/3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f-kube-api-access-tbld6\") pod \"multus-bk79c\" (UID: \"3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f\") " pod="openshift-multus/multus-bk79c" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.595320 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f-host-var-lib-cni-bin\") pod \"multus-bk79c\" (UID: \"3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f\") " pod="openshift-multus/multus-bk79c" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.595368 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f-host-var-lib-kubelet\") pod \"multus-bk79c\" (UID: \"3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f\") " pod="openshift-multus/multus-bk79c" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.595400 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8549d8ab-08fd-4d10-b03e-d162d745184a-metrics-certs\") pod \"network-metrics-daemon-gvjb6\" (UID: \"8549d8ab-08fd-4d10-b03e-d162d745184a\") " pod="openshift-multus/network-metrics-daemon-gvjb6" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.595701 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" 
(UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.595830 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f-cnibin\") pod \"multus-bk79c\" (UID: \"3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f\") " pod="openshift-multus/multus-bk79c" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.595876 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f-host-run-k8s-cni-cncf-io\") pod \"multus-bk79c\" (UID: \"3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f\") " pod="openshift-multus/multus-bk79c" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.595902 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f-host-run-netns\") pod \"multus-bk79c\" (UID: \"3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f\") " pod="openshift-multus/multus-bk79c" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.595949 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.595974 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f-system-cni-dir\") pod \"multus-bk79c\" (UID: \"3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f\") " pod="openshift-multus/multus-bk79c" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.596014 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f-hostroot\") pod \"multus-bk79c\" (UID: \"3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f\") " pod="openshift-multus/multus-bk79c" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.596045 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f-multus-daemon-config\") pod \"multus-bk79c\" (UID: \"3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f\") " pod="openshift-multus/multus-bk79c" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.596074 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/16c76ea1-575d-492f-b64a-9116b99a5b28-host\") pod \"node-ca-qnqx2\" (UID: \"16c76ea1-575d-492f-b64a-9116b99a5b28\") " pod="openshift-image-registry/node-ca-qnqx2" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.596102 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85jg5\" (UniqueName: \"kubernetes.io/projected/fc38d06d-c458-429d-8dbf-43aab1cd4e57-kube-api-access-85jg5\") pod \"multus-additional-cni-plugins-9mfdf\" (UID: \"fc38d06d-c458-429d-8dbf-43aab1cd4e57\") " pod="openshift-multus/multus-additional-cni-plugins-9mfdf" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.596140 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4vhr\" 
(UniqueName: \"kubernetes.io/projected/925f8c53-ccbf-4f3c-a811-4d64d678e217-kube-api-access-t4vhr\") pod \"ovnkube-control-plane-57b78d8988-zg4cj\" (UID: \"925f8c53-ccbf-4f3c-a811-4d64d678e217\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-zg4cj" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.596170 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.596194 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f-multus-cni-dir\") pod \"multus-bk79c\" (UID: \"3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f\") " pod="openshift-multus/multus-bk79c" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.596221 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f-host-var-lib-cni-multus\") pod \"multus-bk79c\" (UID: \"3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f\") " pod="openshift-multus/multus-bk79c" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.596251 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.596283 5104 reconciler_common.go:224] "operationExecutor.MountVolume 
started for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.596331 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.596395 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f8c53-ccbf-4f3c-a811-4d64d678e217-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-zg4cj\" (UID: \"925f8c53-ccbf-4f3c-a811-4d64d678e217\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-zg4cj" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.596426 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbm4b\" (UniqueName: \"kubernetes.io/projected/8549d8ab-08fd-4d10-b03e-d162d745184a-kube-api-access-hbm4b\") pod \"network-metrics-daemon-gvjb6\" (UID: \"8549d8ab-08fd-4d10-b03e-d162d745184a\") " pod="openshift-multus/network-metrics-daemon-gvjb6" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.596455 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.596488 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f-cni-binary-copy\") pod \"multus-bk79c\" (UID: \"3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f\") " pod="openshift-multus/multus-bk79c" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.596515 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/fc38d06d-c458-429d-8dbf-43aab1cd4e57-os-release\") pod \"multus-additional-cni-plugins-9mfdf\" (UID: \"fc38d06d-c458-429d-8dbf-43aab1cd4e57\") " pod="openshift-multus/multus-additional-cni-plugins-9mfdf" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.596541 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4dd9b451-9f5e-4822-b340-7557a89a3ce0-run-ovn\") pod \"ovnkube-node-dr5dp\" (UID: \"4dd9b451-9f5e-4822-b340-7557a89a3ce0\") " pod="openshift-ovn-kubernetes/ovnkube-node-dr5dp" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.596568 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4dd9b451-9f5e-4822-b340-7557a89a3ce0-host-run-ovn-kubernetes\") pod \"ovnkube-node-dr5dp\" (UID: \"4dd9b451-9f5e-4822-b340-7557a89a3ce0\") " pod="openshift-ovn-kubernetes/ovnkube-node-dr5dp" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.596594 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4dd9b451-9f5e-4822-b340-7557a89a3ce0-host-cni-bin\") pod \"ovnkube-node-dr5dp\" (UID: 
\"4dd9b451-9f5e-4822-b340-7557a89a3ce0\") " pod="openshift-ovn-kubernetes/ovnkube-node-dr5dp" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.596622 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4dd9b451-9f5e-4822-b340-7557a89a3ce0-ovn-node-metrics-cert\") pod \"ovnkube-node-dr5dp\" (UID: \"4dd9b451-9f5e-4822-b340-7557a89a3ce0\") " pod="openshift-ovn-kubernetes/ovnkube-node-dr5dp" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.596647 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4dd9b451-9f5e-4822-b340-7557a89a3ce0-ovnkube-script-lib\") pod \"ovnkube-node-dr5dp\" (UID: \"4dd9b451-9f5e-4822-b340-7557a89a3ce0\") " pod="openshift-ovn-kubernetes/ovnkube-node-dr5dp" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.596682 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.591810 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.596712 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f-os-release\") pod \"multus-bk79c\" (UID: \"3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f\") " pod="openshift-multus/multus-bk79c" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.596744 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f-multus-conf-dir\") pod \"multus-bk79c\" (UID: \"3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f\") " pod="openshift-multus/multus-bk79c" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.596769 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fc38d06d-c458-429d-8dbf-43aab1cd4e57-tuning-conf-dir\") pod \"multus-additional-cni-plugins-9mfdf\" (UID: \"fc38d06d-c458-429d-8dbf-43aab1cd4e57\") " pod="openshift-multus/multus-additional-cni-plugins-9mfdf" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.591942 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" (OuterVolumeSpecName: "kube-api-access-zg8nc") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "kube-api-access-zg8nc". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.596796 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4dd9b451-9f5e-4822-b340-7557a89a3ce0-node-log\") pod \"ovnkube-node-dr5dp\" (UID: \"4dd9b451-9f5e-4822-b340-7557a89a3ce0\") " pod="openshift-ovn-kubernetes/ovnkube-node-dr5dp" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.596832 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fc38d06d-c458-429d-8dbf-43aab1cd4e57-system-cni-dir\") pod \"multus-additional-cni-plugins-9mfdf\" (UID: \"fc38d06d-c458-429d-8dbf-43aab1cd4e57\") " pod="openshift-multus/multus-additional-cni-plugins-9mfdf" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.596879 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fc38d06d-c458-429d-8dbf-43aab1cd4e57-cnibin\") pod \"multus-additional-cni-plugins-9mfdf\" (UID: \"fc38d06d-c458-429d-8dbf-43aab1cd4e57\") " pod="openshift-multus/multus-additional-cni-plugins-9mfdf" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.596911 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4dd9b451-9f5e-4822-b340-7557a89a3ce0-run-systemd\") pod \"ovnkube-node-dr5dp\" (UID: \"4dd9b451-9f5e-4822-b340-7557a89a3ce0\") " pod="openshift-ovn-kubernetes/ovnkube-node-dr5dp" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.596938 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4dd9b451-9f5e-4822-b340-7557a89a3ce0-log-socket\") pod \"ovnkube-node-dr5dp\" (UID: 
\"4dd9b451-9f5e-4822-b340-7557a89a3ce0\") " pod="openshift-ovn-kubernetes/ovnkube-node-dr5dp" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.596969 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4dd9b451-9f5e-4822-b340-7557a89a3ce0-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-dr5dp\" (UID: \"4dd9b451-9f5e-4822-b340-7557a89a3ce0\") " pod="openshift-ovn-kubernetes/ovnkube-node-dr5dp" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.597001 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4dd9b451-9f5e-4822-b340-7557a89a3ce0-env-overrides\") pod \"ovnkube-node-dr5dp\" (UID: \"4dd9b451-9f5e-4822-b340-7557a89a3ce0\") " pod="openshift-ovn-kubernetes/ovnkube-node-dr5dp" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.597034 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f8c53-ccbf-4f3c-a811-4d64d678e217-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-zg4cj\" (UID: \"925f8c53-ccbf-4f3c-a811-4d64d678e217\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-zg4cj" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.597061 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/fc38d06d-c458-429d-8dbf-43aab1cd4e57-cni-binary-copy\") pod \"multus-additional-cni-plugins-9mfdf\" (UID: \"fc38d06d-c458-429d-8dbf-43aab1cd4e57\") " pod="openshift-multus/multus-additional-cni-plugins-9mfdf" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.597091 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"hosts-file\" (UniqueName: \"kubernetes.io/host-path/27b37cd2-349b-4e9b-9665-06efa944384c-hosts-file\") pod \"node-resolver-qpj6b\" (UID: \"27b37cd2-349b-4e9b-9665-06efa944384c\") " pod="openshift-dns/node-resolver-qpj6b" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.597117 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/27b37cd2-349b-4e9b-9665-06efa944384c-tmp-dir\") pod \"node-resolver-qpj6b\" (UID: \"27b37cd2-349b-4e9b-9665-06efa944384c\") " pod="openshift-dns/node-resolver-qpj6b" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.598193 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4dd9b451-9f5e-4822-b340-7557a89a3ce0-systemd-units\") pod \"ovnkube-node-dr5dp\" (UID: \"4dd9b451-9f5e-4822-b340-7557a89a3ce0\") " pod="openshift-ovn-kubernetes/ovnkube-node-dr5dp" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.598529 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4dd9b451-9f5e-4822-b340-7557a89a3ce0-etc-openvswitch\") pod \"ovnkube-node-dr5dp\" (UID: \"4dd9b451-9f5e-4822-b340-7557a89a3ce0\") " pod="openshift-ovn-kubernetes/ovnkube-node-dr5dp" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.598723 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkzd7\" (UniqueName: \"kubernetes.io/projected/2f49b5db-a679-4eef-9bf2-8d0275caac12-kube-api-access-tkzd7\") pod \"machine-config-daemon-jzfxc\" (UID: \"2f49b5db-a679-4eef-9bf2-8d0275caac12\") " pod="openshift-machine-config-operator/machine-config-daemon-jzfxc" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.598908 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f-etc-kubernetes\") pod \"multus-bk79c\" (UID: \"3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f\") " pod="openshift-multus/multus-bk79c" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.599024 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkmsn\" (UniqueName: \"kubernetes.io/projected/4dd9b451-9f5e-4822-b340-7557a89a3ce0-kube-api-access-qkmsn\") pod \"ovnkube-node-dr5dp\" (UID: \"4dd9b451-9f5e-4822-b340-7557a89a3ce0\") " pod="openshift-ovn-kubernetes/ovnkube-node-dr5dp" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.599143 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2f49b5db-a679-4eef-9bf2-8d0275caac12-proxy-tls\") pod \"machine-config-daemon-jzfxc\" (UID: \"2f49b5db-a679-4eef-9bf2-8d0275caac12\") " pod="openshift-machine-config-operator/machine-config-daemon-jzfxc" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.599251 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.599694 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f8c53-ccbf-4f3c-a811-4d64d678e217-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-zg4cj\" (UID: \"925f8c53-ccbf-4f3c-a811-4d64d678e217\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-zg4cj" Jan 30 00:11:44 crc 
kubenswrapper[5104]: I0130 00:11:44.599822 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.599991 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4dd9b451-9f5e-4822-b340-7557a89a3ce0-host-slash\") pod \"ovnkube-node-dr5dp\" (UID: \"4dd9b451-9f5e-4822-b340-7557a89a3ce0\") " pod="openshift-ovn-kubernetes/ovnkube-node-dr5dp" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.600159 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2f49b5db-a679-4eef-9bf2-8d0275caac12-mcd-auth-proxy-config\") pod \"machine-config-daemon-jzfxc\" (UID: \"2f49b5db-a679-4eef-9bf2-8d0275caac12\") " pod="openshift-machine-config-operator/machine-config-daemon-jzfxc" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.600247 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f-multus-socket-dir-parent\") pod \"multus-bk79c\" (UID: \"3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f\") " pod="openshift-multus/multus-bk79c" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.600335 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/fc38d06d-c458-429d-8dbf-43aab1cd4e57-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-9mfdf\" (UID: 
\"fc38d06d-c458-429d-8dbf-43aab1cd4e57\") " pod="openshift-multus/multus-additional-cni-plugins-9mfdf" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.592317 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.592375 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" (OuterVolumeSpecName: "images") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.591972 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca" (OuterVolumeSpecName: "client-ca") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.592790 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2" (OuterVolumeSpecName: "kube-api-access-ks6v2") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "kube-api-access-ks6v2". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.592815 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.592820 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv" (OuterVolumeSpecName: "kube-api-access-6rmnv") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "kube-api-access-6rmnv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.600737 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.600769 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr" (OuterVolumeSpecName: "kube-api-access-6g4lr") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "kube-api-access-6g4lr". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.600807 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.592937 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" (OuterVolumeSpecName: "config") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.593145 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.593239 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume" (OuterVolumeSpecName: "config-volume") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.593603 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.601150 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf" (OuterVolumeSpecName: "kube-api-access-6dmhf") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "kube-api-access-6dmhf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.593662 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert" (OuterVolumeSpecName: "cert") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.593680 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.593717 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.593712 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.594032 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq" (OuterVolumeSpecName: "kube-api-access-m26jq") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "kube-api-access-m26jq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.594069 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf" (OuterVolumeSpecName: "kube-api-access-ptkcf") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "kube-api-access-ptkcf". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.594339 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.594467 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps" (OuterVolumeSpecName: "kube-api-access-d7cps") pod "af41de71-79cf-4590-bbe9-9e8b848862cb" (UID: "af41de71-79cf-4590-bbe9-9e8b848862cb"). InnerVolumeSpecName "kube-api-access-d7cps". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.594585 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.594763 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6" (OuterVolumeSpecName: "kube-api-access-pllx6") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "kube-api-access-pllx6". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.594714 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.594806 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.594817 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m" (OuterVolumeSpecName: "kube-api-access-4hb7m") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "kube-api-access-4hb7m". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.594895 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.595462 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.595644 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.595743 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config" (OuterVolumeSpecName: "config") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.596000 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd" (OuterVolumeSpecName: "kube-api-access-mjwtd") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "kube-api-access-mjwtd". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.596265 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images" (OuterVolumeSpecName: "images") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.596514 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.596806 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv" (OuterVolumeSpecName: "kube-api-access-xxfcv") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "kube-api-access-xxfcv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.596973 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca" (OuterVolumeSpecName: "service-ca") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.597167 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.597195 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6" (OuterVolumeSpecName: "kube-api-access-tkdh6") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "kube-api-access-tkdh6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.597256 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.597494 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.597685 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.597823 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.598048 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca" (OuterVolumeSpecName: "client-ca") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.598118 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config" (OuterVolumeSpecName: "config") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.598325 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp" (OuterVolumeSpecName: "kube-api-access-8nspp") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "kube-api-access-8nspp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.598509 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.598843 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw" (OuterVolumeSpecName: "kube-api-access-9z4sw") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "kube-api-access-9z4sw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.598988 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.599020 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj" (OuterVolumeSpecName: "kube-api-access-mfzkj") pod "0effdbcf-dd7d-404d-9d48-77536d665a5d" (UID: "0effdbcf-dd7d-404d-9d48-77536d665a5d"). InnerVolumeSpecName "kube-api-access-mfzkj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.599234 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w" (OuterVolumeSpecName: "kube-api-access-rzt4w") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "kube-api-access-rzt4w". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.599240 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.599342 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "available-featuregates". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.599932 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities" (OuterVolumeSpecName: "utilities") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.600089 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.600280 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs" (OuterVolumeSpecName: "certs") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.600412 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config" (OuterVolumeSpecName: "config") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.600574 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s" (OuterVolumeSpecName: "kube-api-access-xfp5s") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "kube-api-access-xfp5s". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.601109 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.601260 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.601434 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b" (OuterVolumeSpecName: "kube-api-access-zsb9b") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "kube-api-access-zsb9b". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.601654 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.601661 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.602319 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4dd9b451-9f5e-4822-b340-7557a89a3ce0-host-kubelet\") pod \"ovnkube-node-dr5dp\" (UID: \"4dd9b451-9f5e-4822-b340-7557a89a3ce0\") " pod="openshift-ovn-kubernetes/ovnkube-node-dr5dp" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.602381 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfj8r\" (UniqueName: \"kubernetes.io/projected/16c76ea1-575d-492f-b64a-9116b99a5b28-kube-api-access-vfj8r\") pod \"node-ca-qnqx2\" (UID: \"16c76ea1-575d-492f-b64a-9116b99a5b28\") " pod="openshift-image-registry/node-ca-qnqx2" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.602420 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config" 
(OuterVolumeSpecName: "config") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.602427 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/fc38d06d-c458-429d-8dbf-43aab1cd4e57-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-9mfdf\" (UID: \"fc38d06d-c458-429d-8dbf-43aab1cd4e57\") " pod="openshift-multus/multus-additional-cni-plugins-9mfdf"
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.602559 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.602755 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities" (OuterVolumeSpecName: "utilities") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.602975 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57wnv\" (UniqueName: \"kubernetes.io/projected/27b37cd2-349b-4e9b-9665-06efa944384c-kube-api-access-57wnv\") pod \"node-resolver-qpj6b\" (UID: \"27b37cd2-349b-4e9b-9665-06efa944384c\") " pod="openshift-dns/node-resolver-qpj6b"
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.603037 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4dd9b451-9f5e-4822-b340-7557a89a3ce0-host-cni-netd\") pod \"ovnkube-node-dr5dp\" (UID: \"4dd9b451-9f5e-4822-b340-7557a89a3ce0\") " pod="openshift-ovn-kubernetes/ovnkube-node-dr5dp"
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.603134 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/2f49b5db-a679-4eef-9bf2-8d0275caac12-rootfs\") pod \"machine-config-daemon-jzfxc\" (UID: \"2f49b5db-a679-4eef-9bf2-8d0275caac12\") " pod="openshift-machine-config-operator/machine-config-daemon-jzfxc"
Jan 30 00:11:44 crc kubenswrapper[5104]: E0130 00:11:44.603361 5104 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.603430 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.603783 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.603973 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.604117 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f-host-run-multus-certs\") pod \"multus-bk79c\" (UID: \"3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f\") " pod="openshift-multus/multus-bk79c"
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.604175 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/16c76ea1-575d-492f-b64a-9116b99a5b28-serviceca\") pod \"node-ca-qnqx2\" (UID: \"16c76ea1-575d-492f-b64a-9116b99a5b28\") " pod="openshift-image-registry/node-ca-qnqx2"
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.604236 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt"
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.604282 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4dd9b451-9f5e-4822-b340-7557a89a3ce0-host-run-netns\") pod \"ovnkube-node-dr5dp\" (UID: \"4dd9b451-9f5e-4822-b340-7557a89a3ce0\") " pod="openshift-ovn-kubernetes/ovnkube-node-dr5dp"
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.604335 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4dd9b451-9f5e-4822-b340-7557a89a3ce0-var-lib-openvswitch\") pod \"ovnkube-node-dr5dp\" (UID: \"4dd9b451-9f5e-4822-b340-7557a89a3ce0\") " pod="openshift-ovn-kubernetes/ovnkube-node-dr5dp"
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.604420 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4dd9b451-9f5e-4822-b340-7557a89a3ce0-run-openvswitch\") pod \"ovnkube-node-dr5dp\" (UID: \"4dd9b451-9f5e-4822-b340-7557a89a3ce0\") " pod="openshift-ovn-kubernetes/ovnkube-node-dr5dp"
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.604559 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" (OuterVolumeSpecName: "kube-api-access-hckvg") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "kube-api-access-hckvg". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.604619 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6" (OuterVolumeSpecName: "kube-api-access-ftwb6") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "kube-api-access-ftwb6". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.605321 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7"
Jan 30 00:11:44 crc kubenswrapper[5104]: E0130 00:11:44.605419 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-30 00:11:45.105399799 +0000 UTC m=+85.837739018 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 30 00:11:44 crc kubenswrapper[5104]: E0130 00:11:44.605901 5104 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.605968 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs" (OuterVolumeSpecName: "kube-api-access-l87hs") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "kube-api-access-l87hs". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.606153 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt"
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.606250 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.606302 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv"
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.606345 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t" (OuterVolumeSpecName: "kube-api-access-zth6t") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "kube-api-access-zth6t". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.606426 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt"
Jan 30 00:11:44 crc kubenswrapper[5104]: E0130 00:11:44.606707 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-30 00:11:45.106687723 +0000 UTC m=+85.839026952 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.606775 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.606830 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4dd9b451-9f5e-4822-b340-7557a89a3ce0-ovnkube-config\") pod \"ovnkube-node-dr5dp\" (UID: \"4dd9b451-9f5e-4822-b340-7557a89a3ce0\") " pod="openshift-ovn-kubernetes/ovnkube-node-dr5dp"
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.607036 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.607058 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config" (OuterVolumeSpecName: "config") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 30 00:11:44 crc kubenswrapper[5104]: E0130 00:11:44.610617 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:11:45.110584119 +0000 UTC m=+85.842923348 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.610778 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt"
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.611039 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj" (OuterVolumeSpecName: "kube-api-access-qgrkj") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "kube-api-access-qgrkj". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.611503 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities" (OuterVolumeSpecName: "utilities") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.611984 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts" (OuterVolumeSpecName: "kube-api-access-4g8ts") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "kube-api-access-4g8ts". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.613371 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp" (OuterVolumeSpecName: "tmp") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.613406 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk" (OuterVolumeSpecName: "kube-api-access-qqbfk") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "kube-api-access-qqbfk". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.614368 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.614737 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 30 00:11:44 crc kubenswrapper[5104]: E0130 00:11:44.616235 5104 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 30 00:11:44 crc kubenswrapper[5104]: E0130 00:11:44.616267 5104 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 30 00:11:44 crc kubenswrapper[5104]: E0130 00:11:44.616286 5104 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 30 00:11:44 crc kubenswrapper[5104]: E0130 00:11:44.616415 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-30 00:11:45.116394316 +0000 UTC m=+85.848733545 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.616690 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.616729 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9" (OuterVolumeSpecName: "kube-api-access-ddlk9") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "kube-api-access-ddlk9". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.617023 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config" (OuterVolumeSpecName: "config") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.617196 5104 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") on node \"crc\" DevicePath \"\""
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.617551 5104 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") on node \"crc\" DevicePath \"\""
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.617580 5104 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.617598 5104 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") on node \"crc\" DevicePath \"\""
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.617613 5104 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") on node \"crc\" DevicePath \"\""
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.617627 5104 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.617641 5104 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.617655 5104 reconciler_common.go:299] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.617672 5104 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") on node \"crc\" DevicePath \"\""
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.617687 5104 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") on node \"crc\" DevicePath \"\""
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.617701 5104 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.617714 5104 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") on node \"crc\" DevicePath \"\""
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.617727 5104 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") on node \"crc\" DevicePath \"\""
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.617845 5104 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") on node \"crc\" DevicePath \"\""
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.617883 5104 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") on node \"crc\" DevicePath \"\""
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.617898 5104 reconciler_common.go:299] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") on node \"crc\" DevicePath \"\""
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.617912 5104 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tknt7\" (UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") on node \"crc\" DevicePath \"\""
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.617930 5104 reconciler_common.go:299] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") on node \"crc\" DevicePath \"\""
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.617945 5104 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") on node \"crc\" DevicePath \"\""
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.617960 5104 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.617973 5104 reconciler_common.go:299] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") on node \"crc\" DevicePath \"\""
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.617947 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.617987 5104 reconciler_common.go:299] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.618064 5104 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") on node \"crc\" DevicePath \"\""
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.618140 5104 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") on node \"crc\" DevicePath \"\""
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.618157 5104 reconciler_common.go:299] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") on node \"crc\" DevicePath \"\""
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.618169 5104 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") on node \"crc\" DevicePath \"\""
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.618180 5104 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.618194 5104 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.618204 5104 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") on node \"crc\" DevicePath \"\""
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.618216 5104 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.618229 5104 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") on node \"crc\" DevicePath \"\""
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.618241 5104 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.618254 5104 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") on node \"crc\" DevicePath \"\""
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.618264 5104 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.618279 5104 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.618291 5104 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") on node \"crc\" DevicePath \"\""
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.618302 5104 reconciler_common.go:299] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") on node \"crc\" DevicePath \"\""
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.618311 5104 reconciler_common.go:299] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") on node \"crc\" DevicePath \"\""
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.618322 5104 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") on node \"crc\" DevicePath \"\""
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.618331 5104 reconciler_common.go:299] "Volume detached for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") on node \"crc\" DevicePath \"\""
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.618342 5104 reconciler_common.go:299] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") on node \"crc\" DevicePath \"\""
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.618352 5104 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.618363 5104 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") on node \"crc\" DevicePath \"\""
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.618407 5104 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") on node \"crc\" DevicePath \"\""
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.618422 5104 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") on node \"crc\" DevicePath \"\""
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.618432 5104 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") on node \"crc\" DevicePath \"\""
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.618445 5104 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") on node \"crc\" DevicePath \"\""
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.618455 5104 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.618467 5104 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.618478 5104 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") on node \"crc\" DevicePath \"\""
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.618514 5104 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.618525 5104 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.618535 5104 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") on node \"crc\" DevicePath \"\""
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.618545 5104 reconciler_common.go:299] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") on node \"crc\" DevicePath \"\""
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.618579 5104 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") on node \"crc\" DevicePath \"\""
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.618590 5104 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.618599 5104 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") on node \"crc\" DevicePath \"\""
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.618609 5104 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") on node \"crc\" DevicePath \"\""
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.619068 5104 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") on node \"crc\" DevicePath \"\""
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.619083 5104 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") on node \"crc\" DevicePath \"\""
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.619098 5104 reconciler_common.go:299] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") on node \"crc\" DevicePath \"\""
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.619108 5104 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.619118 5104 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.619131 5104 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.619145 5104 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") on node \"crc\" DevicePath \"\""
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.619156 5104 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") on node \"crc\" DevicePath \"\""
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.619167 5104 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.619179 5104 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nb9c\" (UniqueName: \"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") on node \"crc\" DevicePath \"\""
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.619191 5104 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") on node \"crc\" DevicePath \"\""
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.619201 5104 reconciler_common.go:299] "Volume detached for volume \"default-certificate\" (UniqueName:
\"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.619211 5104 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.619220 5104 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.619231 5104 reconciler_common.go:299] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.619240 5104 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.619252 5104 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-26xrl\" (UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.619261 5104 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.619274 5104 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 
00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.619284 5104 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.619256 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-zg4cj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"925f8c53-ccbf-4f3c-a811-4d64d678e217\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4vhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4vhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-zg4cj\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.619293 5104 reconciler_common.go:299] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.619384 5104 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.619400 5104 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.619411 5104 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.619422 5104 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.618728 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.619434 5104 reconciler_common.go:299] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.619322 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.619448 5104 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.619471 5104 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.620028 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.620158 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.620953 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv" (OuterVolumeSpecName: "kube-api-access-pddnv") pod "e093be35-bb62-4843-b2e8-094545761610" (UID: "e093be35-bb62-4843-b2e8-094545761610"). InnerVolumeSpecName "kube-api-access-pddnv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.621539 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.622050 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.622983 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: E0130 00:11:44.623053 5104 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 00:11:44 crc kubenswrapper[5104]: E0130 00:11:44.623078 5104 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 00:11:44 crc kubenswrapper[5104]: E0130 00:11:44.623094 5104 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:11:44 crc kubenswrapper[5104]: E0130 00:11:44.623372 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-30 00:11:45.123348333 +0000 UTC m=+85.855687562 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.623733 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.623799 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit" (OuterVolumeSpecName: "audit") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.624066 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config" (OuterVolumeSpecName: "config") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.624078 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.624247 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" (OuterVolumeSpecName: "ca-trust-extracted-pem") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "ca-trust-extracted-pem". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.624563 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp" (OuterVolumeSpecName: "tmp") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.625474 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.625940 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.625997 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l" (OuterVolumeSpecName: "kube-api-access-sbc2l") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "kube-api-access-sbc2l". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.626464 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.627062 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.627106 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.627467 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.627521 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7" (OuterVolumeSpecName: "kube-api-access-hm9x7") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "kube-api-access-hm9x7". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.627624 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.627662 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.627660 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.628013 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "tmp-dir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.628062 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh" (OuterVolumeSpecName: "kube-api-access-m5lgh") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "kube-api-access-m5lgh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.628092 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl" (OuterVolumeSpecName: "kube-api-access-twvbl") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "kube-api-access-twvbl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.628104 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca" (OuterVolumeSpecName: "service-ca") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.628182 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.628253 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.628358 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.629239 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config" (OuterVolumeSpecName: "console-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.629825 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "tmp-dir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.630242 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.633100 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.633764 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca" (OuterVolumeSpecName: "serviceca") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.635084 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.637425 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities" (OuterVolumeSpecName: "utilities") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.638026 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config" (OuterVolumeSpecName: "config") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.638280 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz" (OuterVolumeSpecName: "kube-api-access-7jjkz") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "kube-api-access-7jjkz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.638481 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp" (OuterVolumeSpecName: "tmp") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.638509 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.638492 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.638543 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.639756 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.640231 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config" (OuterVolumeSpecName: "config") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.641174 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config" (OuterVolumeSpecName: "config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.641801 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.642211 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp" (OuterVolumeSpecName: "tmp") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.642290 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9" (OuterVolumeSpecName: "kube-api-access-9vsz9") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "kube-api-access-9vsz9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.642931 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config" (OuterVolumeSpecName: "config") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.644530 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config" (OuterVolumeSpecName: "config") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.646342 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1afc018-4e45-49c3-a326-4068c590483b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://3ea0c352cb0e6754f7a7b428ac74c8d1d59af3fcd309fead8f147b31fc9d84b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mount
Path\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://ba7755cd1898e33390a59405284ca9bc8ab6567dee2e7c1134c9093d25ae341f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://560668f5a529df74a5be2ea17dcc5c09bd64122a4f78def29e8d38b4f098ec64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://10172da6a5c353a3c321326f80b9af59fe5c6acdb48f8951f30401fa25fde394\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:27Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://1a8af248a457a824a347b4bacdb934ce6f91151e6814ba046ecfd0b2f9fef1c4\\\",\\\"image\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c3dfaca6ebdcc7e86e59721d7b1c4e7825a4a23ea5ee58dc5b1445a63994b711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3dfaca6ebdcc7e86e59721d7b1c4e7825a4a23ea5ee58dc5b1445a63994b711\\\"
,\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://82c9d3ad0af1dbe7691b30eb224da8a661baeac16b755dc1fccf77c90dda404a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82c9d3ad0af1dbe7691b30eb224da8a661baeac16b755dc1fccf77c90dda404a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://dd29f88f850f66d48dcb41d9ff4b6ed03ce53947fcf1d89e94eb89734d32a9af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etc
d-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd29f88f850f66d48dcb41d9ff4b6ed03ce53947fcf1d89e94eb89734d32a9af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:20Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.648530 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.648595 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr" (OuterVolumeSpecName: "kube-api-access-wj4qr") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "kube-api-access-wj4qr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.648668 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.648673 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd" (OuterVolumeSpecName: "kube-api-access-8pskd") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "kube-api-access-8pskd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.648799 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp" (OuterVolumeSpecName: "tmp") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.648300 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.650186 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.655544 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.657878 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.664541 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.664582 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.664592 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.664607 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.664619 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:44Z","lastTransitionTime":"2026-01-30T00:11:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.664973 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.665824 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.674957 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gvjb6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8549d8ab-08fd-4d10-b03e-d162d745184a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbm4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbm4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gvjb6\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.677246 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.686299 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.691488 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dr5dp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4dd9b451-9f5e-4822-b340-7557a89a3ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmsn\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dr5dp\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.699841 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzfxc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f49b5db-a679-4eef-9bf2-8d0275caac12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tkzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tkzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jzfxc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.711036 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a53efdae-bb47-4e91-8fd9-aa3ce42e07fe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://cefeb3f03767c76f93f967f91a3a91beb76d605eca9cbc8c1511e20275afe6f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6edbf8d3caa46b1b8204f581c4ee351245b3a0569a7dc860e8eebd05c21de73e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://341f0f24fd96be5b40281bed5ebcb965c115891201881ea7fca2d25b621efcf4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a59bc6c54f
ddbc4eea03ba9234e68465071e0ae39793c65b4dee93d13d3d8fc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59bc6c54fddbc4eea03ba9234e68465071e0ae39793c65b4dee93d13d3d8fc2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T00:11:11Z\\\",\\\"message\\\":\\\"rue\\\\nI0130 00:11:10.550402 1 observer_polling.go:159] Starting file observer\\\\nW0130 00:11:10.562731 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 00:11:10.562933 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0130 00:11:10.564179 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1712869688/tls.crt::/tmp/serving-cert-1712869688/tls.key\\\\\\\"\\\\nI0130 00:11:11.630504 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 00:11:11.635621 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 00:11:11.635658 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 00:11:11.635724 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 00:11:11.635735 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 00:11:11.641959 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 00:11:11.642000 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0130 00:11:11.642008 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints 
registered and discovery information is complete\\\\nW0130 00:11:11.642013 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 00:11:11.642036 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 00:11:11.642046 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 00:11:11.642053 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 00:11:11.642062 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 00:11:11.644944 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T00:11:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://cb67eb59e5fa97f3ac0f355c63297316d06ab76329d05baadeb90ba933d0299b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ae33bfa613b46c1b72972cced863e6209b8e9eafade2bfbe6cd19d2fb5ee3591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae33bfa613b46c1b72972cced863e6209b8e9eafade2bfbe6cd19d2fb5ee3591\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.717793 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0a5f88e-2cb1-4067-82fa-dd04127fe6a6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://446c0914d8c5bcbe4b931fac391de5327afb0740f5a647ff10bfa8ae3718070a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://991a729c5e18b1bfa18b949f180147804f656e534eed823b6cfd848589448a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://991a729c5e18b1bfa18b949f180147804f656e534eed823b6cfd848589448a11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.720075 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/4dd9b451-9f5e-4822-b340-7557a89a3ce0-log-socket\") pod \"ovnkube-node-dr5dp\" (UID: \"4dd9b451-9f5e-4822-b340-7557a89a3ce0\") " pod="openshift-ovn-kubernetes/ovnkube-node-dr5dp" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.720104 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4dd9b451-9f5e-4822-b340-7557a89a3ce0-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-dr5dp\" (UID: \"4dd9b451-9f5e-4822-b340-7557a89a3ce0\") " pod="openshift-ovn-kubernetes/ovnkube-node-dr5dp" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.720121 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4dd9b451-9f5e-4822-b340-7557a89a3ce0-env-overrides\") pod \"ovnkube-node-dr5dp\" (UID: \"4dd9b451-9f5e-4822-b340-7557a89a3ce0\") " pod="openshift-ovn-kubernetes/ovnkube-node-dr5dp" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.720136 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4dd9b451-9f5e-4822-b340-7557a89a3ce0-log-socket\") pod \"ovnkube-node-dr5dp\" (UID: \"4dd9b451-9f5e-4822-b340-7557a89a3ce0\") " pod="openshift-ovn-kubernetes/ovnkube-node-dr5dp" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.720139 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f8c53-ccbf-4f3c-a811-4d64d678e217-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-zg4cj\" (UID: \"925f8c53-ccbf-4f3c-a811-4d64d678e217\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-zg4cj" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.720256 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/fc38d06d-c458-429d-8dbf-43aab1cd4e57-cni-binary-copy\") pod \"multus-additional-cni-plugins-9mfdf\" (UID: \"fc38d06d-c458-429d-8dbf-43aab1cd4e57\") " pod="openshift-multus/multus-additional-cni-plugins-9mfdf" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.720313 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/27b37cd2-349b-4e9b-9665-06efa944384c-hosts-file\") pod \"node-resolver-qpj6b\" (UID: \"27b37cd2-349b-4e9b-9665-06efa944384c\") " pod="openshift-dns/node-resolver-qpj6b" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.720333 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/27b37cd2-349b-4e9b-9665-06efa944384c-tmp-dir\") pod \"node-resolver-qpj6b\" (UID: \"27b37cd2-349b-4e9b-9665-06efa944384c\") " pod="openshift-dns/node-resolver-qpj6b" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.720350 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4dd9b451-9f5e-4822-b340-7557a89a3ce0-systemd-units\") pod \"ovnkube-node-dr5dp\" (UID: \"4dd9b451-9f5e-4822-b340-7557a89a3ce0\") " pod="openshift-ovn-kubernetes/ovnkube-node-dr5dp" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.720365 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4dd9b451-9f5e-4822-b340-7557a89a3ce0-etc-openvswitch\") pod \"ovnkube-node-dr5dp\" (UID: \"4dd9b451-9f5e-4822-b340-7557a89a3ce0\") " pod="openshift-ovn-kubernetes/ovnkube-node-dr5dp" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.720385 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tkzd7\" (UniqueName: 
\"kubernetes.io/projected/2f49b5db-a679-4eef-9bf2-8d0275caac12-kube-api-access-tkzd7\") pod \"machine-config-daemon-jzfxc\" (UID: \"2f49b5db-a679-4eef-9bf2-8d0275caac12\") " pod="openshift-machine-config-operator/machine-config-daemon-jzfxc" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.720424 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f-etc-kubernetes\") pod \"multus-bk79c\" (UID: \"3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f\") " pod="openshift-multus/multus-bk79c" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.720440 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qkmsn\" (UniqueName: \"kubernetes.io/projected/4dd9b451-9f5e-4822-b340-7557a89a3ce0-kube-api-access-qkmsn\") pod \"ovnkube-node-dr5dp\" (UID: \"4dd9b451-9f5e-4822-b340-7557a89a3ce0\") " pod="openshift-ovn-kubernetes/ovnkube-node-dr5dp" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.720457 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2f49b5db-a679-4eef-9bf2-8d0275caac12-proxy-tls\") pod \"machine-config-daemon-jzfxc\" (UID: \"2f49b5db-a679-4eef-9bf2-8d0275caac12\") " pod="openshift-machine-config-operator/machine-config-daemon-jzfxc" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.720493 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f8c53-ccbf-4f3c-a811-4d64d678e217-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-zg4cj\" (UID: \"925f8c53-ccbf-4f3c-a811-4d64d678e217\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-zg4cj" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.720510 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/4dd9b451-9f5e-4822-b340-7557a89a3ce0-host-slash\") pod \"ovnkube-node-dr5dp\" (UID: \"4dd9b451-9f5e-4822-b340-7557a89a3ce0\") " pod="openshift-ovn-kubernetes/ovnkube-node-dr5dp" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.720526 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2f49b5db-a679-4eef-9bf2-8d0275caac12-mcd-auth-proxy-config\") pod \"machine-config-daemon-jzfxc\" (UID: \"2f49b5db-a679-4eef-9bf2-8d0275caac12\") " pod="openshift-machine-config-operator/machine-config-daemon-jzfxc" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.720541 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f-multus-socket-dir-parent\") pod \"multus-bk79c\" (UID: \"3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f\") " pod="openshift-multus/multus-bk79c" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.720560 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/fc38d06d-c458-429d-8dbf-43aab1cd4e57-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-9mfdf\" (UID: \"fc38d06d-c458-429d-8dbf-43aab1cd4e57\") " pod="openshift-multus/multus-additional-cni-plugins-9mfdf" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.720578 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4dd9b451-9f5e-4822-b340-7557a89a3ce0-host-kubelet\") pod \"ovnkube-node-dr5dp\" (UID: \"4dd9b451-9f5e-4822-b340-7557a89a3ce0\") " pod="openshift-ovn-kubernetes/ovnkube-node-dr5dp" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.720594 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"kube-api-access-vfj8r\" (UniqueName: \"kubernetes.io/projected/16c76ea1-575d-492f-b64a-9116b99a5b28-kube-api-access-vfj8r\") pod \"node-ca-qnqx2\" (UID: \"16c76ea1-575d-492f-b64a-9116b99a5b28\") " pod="openshift-image-registry/node-ca-qnqx2" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.720611 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/fc38d06d-c458-429d-8dbf-43aab1cd4e57-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-9mfdf\" (UID: \"fc38d06d-c458-429d-8dbf-43aab1cd4e57\") " pod="openshift-multus/multus-additional-cni-plugins-9mfdf" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.720627 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-57wnv\" (UniqueName: \"kubernetes.io/projected/27b37cd2-349b-4e9b-9665-06efa944384c-kube-api-access-57wnv\") pod \"node-resolver-qpj6b\" (UID: \"27b37cd2-349b-4e9b-9665-06efa944384c\") " pod="openshift-dns/node-resolver-qpj6b" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.720643 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4dd9b451-9f5e-4822-b340-7557a89a3ce0-host-cni-netd\") pod \"ovnkube-node-dr5dp\" (UID: \"4dd9b451-9f5e-4822-b340-7557a89a3ce0\") " pod="openshift-ovn-kubernetes/ovnkube-node-dr5dp" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.720659 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/2f49b5db-a679-4eef-9bf2-8d0275caac12-rootfs\") pod \"machine-config-daemon-jzfxc\" (UID: \"2f49b5db-a679-4eef-9bf2-8d0275caac12\") " pod="openshift-machine-config-operator/machine-config-daemon-jzfxc" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.720674 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f-host-run-multus-certs\") pod \"multus-bk79c\" (UID: \"3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f\") " pod="openshift-multus/multus-bk79c" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.720730 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/2f49b5db-a679-4eef-9bf2-8d0275caac12-rootfs\") pod \"machine-config-daemon-jzfxc\" (UID: \"2f49b5db-a679-4eef-9bf2-8d0275caac12\") " pod="openshift-machine-config-operator/machine-config-daemon-jzfxc" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.720770 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4dd9b451-9f5e-4822-b340-7557a89a3ce0-etc-openvswitch\") pod \"ovnkube-node-dr5dp\" (UID: \"4dd9b451-9f5e-4822-b340-7557a89a3ce0\") " pod="openshift-ovn-kubernetes/ovnkube-node-dr5dp" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.720999 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4dd9b451-9f5e-4822-b340-7557a89a3ce0-env-overrides\") pod \"ovnkube-node-dr5dp\" (UID: \"4dd9b451-9f5e-4822-b340-7557a89a3ce0\") " pod="openshift-ovn-kubernetes/ovnkube-node-dr5dp" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.721031 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f-etc-kubernetes\") pod \"multus-bk79c\" (UID: \"3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f\") " pod="openshift-multus/multus-bk79c" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.721079 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f-host-run-multus-certs\") pod 
\"multus-bk79c\" (UID: \"3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f\") " pod="openshift-multus/multus-bk79c" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.721080 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/27b37cd2-349b-4e9b-9665-06efa944384c-hosts-file\") pod \"node-resolver-qpj6b\" (UID: \"27b37cd2-349b-4e9b-9665-06efa944384c\") " pod="openshift-dns/node-resolver-qpj6b" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.721148 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/16c76ea1-575d-492f-b64a-9116b99a5b28-serviceca\") pod \"node-ca-qnqx2\" (UID: \"16c76ea1-575d-492f-b64a-9116b99a5b28\") " pod="openshift-image-registry/node-ca-qnqx2" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.721211 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4dd9b451-9f5e-4822-b340-7557a89a3ce0-host-run-netns\") pod \"ovnkube-node-dr5dp\" (UID: \"4dd9b451-9f5e-4822-b340-7557a89a3ce0\") " pod="openshift-ovn-kubernetes/ovnkube-node-dr5dp" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.721250 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4dd9b451-9f5e-4822-b340-7557a89a3ce0-var-lib-openvswitch\") pod \"ovnkube-node-dr5dp\" (UID: \"4dd9b451-9f5e-4822-b340-7557a89a3ce0\") " pod="openshift-ovn-kubernetes/ovnkube-node-dr5dp" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.721286 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4dd9b451-9f5e-4822-b340-7557a89a3ce0-run-openvswitch\") pod \"ovnkube-node-dr5dp\" (UID: \"4dd9b451-9f5e-4822-b340-7557a89a3ce0\") " pod="openshift-ovn-kubernetes/ovnkube-node-dr5dp" Jan 30 
00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.721350 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4dd9b451-9f5e-4822-b340-7557a89a3ce0-ovnkube-config\") pod \"ovnkube-node-dr5dp\" (UID: \"4dd9b451-9f5e-4822-b340-7557a89a3ce0\") " pod="openshift-ovn-kubernetes/ovnkube-node-dr5dp" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.721405 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4dd9b451-9f5e-4822-b340-7557a89a3ce0-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-dr5dp\" (UID: \"4dd9b451-9f5e-4822-b340-7557a89a3ce0\") " pod="openshift-ovn-kubernetes/ovnkube-node-dr5dp" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.721426 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tbld6\" (UniqueName: \"kubernetes.io/projected/3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f-kube-api-access-tbld6\") pod \"multus-bk79c\" (UID: \"3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f\") " pod="openshift-multus/multus-bk79c" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.721438 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4dd9b451-9f5e-4822-b340-7557a89a3ce0-host-kubelet\") pod \"ovnkube-node-dr5dp\" (UID: \"4dd9b451-9f5e-4822-b340-7557a89a3ce0\") " pod="openshift-ovn-kubernetes/ovnkube-node-dr5dp" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.721470 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f-host-var-lib-cni-bin\") pod \"multus-bk79c\" (UID: \"3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f\") " pod="openshift-multus/multus-bk79c" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.721503 5104 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/27b37cd2-349b-4e9b-9665-06efa944384c-tmp-dir\") pod \"node-resolver-qpj6b\" (UID: \"27b37cd2-349b-4e9b-9665-06efa944384c\") " pod="openshift-dns/node-resolver-qpj6b" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.721507 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f-host-var-lib-kubelet\") pod \"multus-bk79c\" (UID: \"3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f\") " pod="openshift-multus/multus-bk79c" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.721542 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4dd9b451-9f5e-4822-b340-7557a89a3ce0-host-slash\") pod \"ovnkube-node-dr5dp\" (UID: \"4dd9b451-9f5e-4822-b340-7557a89a3ce0\") " pod="openshift-ovn-kubernetes/ovnkube-node-dr5dp" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.721548 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8549d8ab-08fd-4d10-b03e-d162d745184a-metrics-certs\") pod \"network-metrics-daemon-gvjb6\" (UID: \"8549d8ab-08fd-4d10-b03e-d162d745184a\") " pod="openshift-multus/network-metrics-daemon-gvjb6" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.721599 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f-cnibin\") pod \"multus-bk79c\" (UID: \"3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f\") " pod="openshift-multus/multus-bk79c" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.721632 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: 
\"kubernetes.io/host-path/3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f-host-run-k8s-cni-cncf-io\") pod \"multus-bk79c\" (UID: \"3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f\") " pod="openshift-multus/multus-bk79c" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.721663 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f-host-run-netns\") pod \"multus-bk79c\" (UID: \"3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f\") " pod="openshift-multus/multus-bk79c" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.721678 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/fc38d06d-c458-429d-8dbf-43aab1cd4e57-cni-binary-copy\") pod \"multus-additional-cni-plugins-9mfdf\" (UID: \"fc38d06d-c458-429d-8dbf-43aab1cd4e57\") " pod="openshift-multus/multus-additional-cni-plugins-9mfdf" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.721705 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.721744 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f-system-cni-dir\") pod \"multus-bk79c\" (UID: \"3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f\") " pod="openshift-multus/multus-bk79c" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.721760 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f-host-var-lib-cni-bin\") pod \"multus-bk79c\" 
(UID: \"3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f\") " pod="openshift-multus/multus-bk79c" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.721774 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f-hostroot\") pod \"multus-bk79c\" (UID: \"3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f\") " pod="openshift-multus/multus-bk79c" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.721789 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4dd9b451-9f5e-4822-b340-7557a89a3ce0-var-lib-openvswitch\") pod \"ovnkube-node-dr5dp\" (UID: \"4dd9b451-9f5e-4822-b340-7557a89a3ce0\") " pod="openshift-ovn-kubernetes/ovnkube-node-dr5dp" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.721808 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4dd9b451-9f5e-4822-b340-7557a89a3ce0-host-cni-netd\") pod \"ovnkube-node-dr5dp\" (UID: \"4dd9b451-9f5e-4822-b340-7557a89a3ce0\") " pod="openshift-ovn-kubernetes/ovnkube-node-dr5dp" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.721813 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f-multus-daemon-config\") pod \"multus-bk79c\" (UID: \"3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f\") " pod="openshift-multus/multus-bk79c" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.721837 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4dd9b451-9f5e-4822-b340-7557a89a3ce0-run-openvswitch\") pod \"ovnkube-node-dr5dp\" (UID: \"4dd9b451-9f5e-4822-b340-7557a89a3ce0\") " pod="openshift-ovn-kubernetes/ovnkube-node-dr5dp" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 
00:11:44.721882 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/16c76ea1-575d-492f-b64a-9116b99a5b28-host\") pod \"node-ca-qnqx2\" (UID: \"16c76ea1-575d-492f-b64a-9116b99a5b28\") " pod="openshift-image-registry/node-ca-qnqx2" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.721929 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-85jg5\" (UniqueName: \"kubernetes.io/projected/fc38d06d-c458-429d-8dbf-43aab1cd4e57-kube-api-access-85jg5\") pod \"multus-additional-cni-plugins-9mfdf\" (UID: \"fc38d06d-c458-429d-8dbf-43aab1cd4e57\") " pod="openshift-multus/multus-additional-cni-plugins-9mfdf" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.721974 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-t4vhr\" (UniqueName: \"kubernetes.io/projected/925f8c53-ccbf-4f3c-a811-4d64d678e217-kube-api-access-t4vhr\") pod \"ovnkube-control-plane-57b78d8988-zg4cj\" (UID: \"925f8c53-ccbf-4f3c-a811-4d64d678e217\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-zg4cj" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.722029 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f-multus-cni-dir\") pod \"multus-bk79c\" (UID: \"3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f\") " pod="openshift-multus/multus-bk79c" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.722061 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f-host-var-lib-cni-multus\") pod \"multus-bk79c\" (UID: \"3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f\") " pod="openshift-multus/multus-bk79c" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.722112 5104 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f8c53-ccbf-4f3c-a811-4d64d678e217-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-zg4cj\" (UID: \"925f8c53-ccbf-4f3c-a811-4d64d678e217\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-zg4cj" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.722150 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hbm4b\" (UniqueName: \"kubernetes.io/projected/8549d8ab-08fd-4d10-b03e-d162d745184a-kube-api-access-hbm4b\") pod \"network-metrics-daemon-gvjb6\" (UID: \"8549d8ab-08fd-4d10-b03e-d162d745184a\") " pod="openshift-multus/network-metrics-daemon-gvjb6" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.722201 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f-cni-binary-copy\") pod \"multus-bk79c\" (UID: \"3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f\") " pod="openshift-multus/multus-bk79c" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.722235 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/fc38d06d-c458-429d-8dbf-43aab1cd4e57-os-release\") pod \"multus-additional-cni-plugins-9mfdf\" (UID: \"fc38d06d-c458-429d-8dbf-43aab1cd4e57\") " pod="openshift-multus/multus-additional-cni-plugins-9mfdf" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.722266 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4dd9b451-9f5e-4822-b340-7557a89a3ce0-run-ovn\") pod \"ovnkube-node-dr5dp\" (UID: \"4dd9b451-9f5e-4822-b340-7557a89a3ce0\") " pod="openshift-ovn-kubernetes/ovnkube-node-dr5dp" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.722288 
5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f-multus-socket-dir-parent\") pod \"multus-bk79c\" (UID: \"3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f\") " pod="openshift-multus/multus-bk79c" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.722319 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f8c53-ccbf-4f3c-a811-4d64d678e217-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-zg4cj\" (UID: \"925f8c53-ccbf-4f3c-a811-4d64d678e217\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-zg4cj" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.722373 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f-cnibin\") pod \"multus-bk79c\" (UID: \"3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f\") " pod="openshift-multus/multus-bk79c" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.722455 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/16c76ea1-575d-492f-b64a-9116b99a5b28-host\") pod \"node-ca-qnqx2\" (UID: \"16c76ea1-575d-492f-b64a-9116b99a5b28\") " pod="openshift-image-registry/node-ca-qnqx2" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.722501 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4dd9b451-9f5e-4822-b340-7557a89a3ce0-run-ovn\") pod \"ovnkube-node-dr5dp\" (UID: \"4dd9b451-9f5e-4822-b340-7557a89a3ce0\") " pod="openshift-ovn-kubernetes/ovnkube-node-dr5dp" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.722505 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/4dd9b451-9f5e-4822-b340-7557a89a3ce0-ovnkube-config\") pod \"ovnkube-node-dr5dp\" (UID: \"4dd9b451-9f5e-4822-b340-7557a89a3ce0\") " pod="openshift-ovn-kubernetes/ovnkube-node-dr5dp" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.722517 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2f49b5db-a679-4eef-9bf2-8d0275caac12-mcd-auth-proxy-config\") pod \"machine-config-daemon-jzfxc\" (UID: \"2f49b5db-a679-4eef-9bf2-8d0275caac12\") " pod="openshift-machine-config-operator/machine-config-daemon-jzfxc" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.722537 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f-multus-cni-dir\") pod \"multus-bk79c\" (UID: \"3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f\") " pod="openshift-multus/multus-bk79c" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.722549 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4dd9b451-9f5e-4822-b340-7557a89a3ce0-host-run-netns\") pod \"ovnkube-node-dr5dp\" (UID: \"4dd9b451-9f5e-4822-b340-7557a89a3ce0\") " pod="openshift-ovn-kubernetes/ovnkube-node-dr5dp" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.722567 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f-host-var-lib-kubelet\") pod \"multus-bk79c\" (UID: \"3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f\") " pod="openshift-multus/multus-bk79c" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.722587 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f-hostroot\") pod \"multus-bk79c\" (UID: 
\"3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f\") " pod="openshift-multus/multus-bk79c" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.722601 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f-host-run-netns\") pod \"multus-bk79c\" (UID: \"3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f\") " pod="openshift-multus/multus-bk79c" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.722641 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f-host-run-k8s-cni-cncf-io\") pod \"multus-bk79c\" (UID: \"3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f\") " pod="openshift-multus/multus-bk79c" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.722674 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.722739 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f-system-cni-dir\") pod \"multus-bk79c\" (UID: \"3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f\") " pod="openshift-multus/multus-bk79c" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.723039 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f-host-var-lib-cni-multus\") pod \"multus-bk79c\" (UID: \"3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f\") " pod="openshift-multus/multus-bk79c" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.723063 5104 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f8c53-ccbf-4f3c-a811-4d64d678e217-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-zg4cj\" (UID: \"925f8c53-ccbf-4f3c-a811-4d64d678e217\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-zg4cj" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.723277 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/fc38d06d-c458-429d-8dbf-43aab1cd4e57-os-release\") pod \"multus-additional-cni-plugins-9mfdf\" (UID: \"fc38d06d-c458-429d-8dbf-43aab1cd4e57\") " pod="openshift-multus/multus-additional-cni-plugins-9mfdf" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.723753 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f-multus-daemon-config\") pod \"multus-bk79c\" (UID: \"3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f\") " pod="openshift-multus/multus-bk79c" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.723840 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4dd9b451-9f5e-4822-b340-7557a89a3ce0-systemd-units\") pod \"ovnkube-node-dr5dp\" (UID: \"4dd9b451-9f5e-4822-b340-7557a89a3ce0\") " pod="openshift-ovn-kubernetes/ovnkube-node-dr5dp" Jan 30 00:11:44 crc kubenswrapper[5104]: E0130 00:11:44.722927 5104 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.724271 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f-cni-binary-copy\") pod \"multus-bk79c\" (UID: 
\"3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f\") " pod="openshift-multus/multus-bk79c" Jan 30 00:11:44 crc kubenswrapper[5104]: E0130 00:11:44.724279 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8549d8ab-08fd-4d10-b03e-d162d745184a-metrics-certs podName:8549d8ab-08fd-4d10-b03e-d162d745184a nodeName:}" failed. No retries permitted until 2026-01-30 00:11:45.22401517 +0000 UTC m=+85.956354389 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8549d8ab-08fd-4d10-b03e-d162d745184a-metrics-certs") pod "network-metrics-daemon-gvjb6" (UID: "8549d8ab-08fd-4d10-b03e-d162d745184a") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.724473 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4dd9b451-9f5e-4822-b340-7557a89a3ce0-host-run-ovn-kubernetes\") pod \"ovnkube-node-dr5dp\" (UID: \"4dd9b451-9f5e-4822-b340-7557a89a3ce0\") " pod="openshift-ovn-kubernetes/ovnkube-node-dr5dp" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.725736 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/16c76ea1-575d-492f-b64a-9116b99a5b28-serviceca\") pod \"node-ca-qnqx2\" (UID: \"16c76ea1-575d-492f-b64a-9116b99a5b28\") " pod="openshift-image-registry/node-ca-qnqx2" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.727078 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f8c53-ccbf-4f3c-a811-4d64d678e217-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-zg4cj\" (UID: \"925f8c53-ccbf-4f3c-a811-4d64d678e217\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-zg4cj" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 
00:11:44.732131 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2f49b5db-a679-4eef-9bf2-8d0275caac12-proxy-tls\") pod \"machine-config-daemon-jzfxc\" (UID: \"2f49b5db-a679-4eef-9bf2-8d0275caac12\") " pod="openshift-machine-config-operator/machine-config-daemon-jzfxc" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.734156 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9mfdf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc38d06d-c458-429d-8dbf-43aab1cd4e57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-85jg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-85jg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-85jg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-85jg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-85jg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-85jg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"
name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-85jg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9mfdf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.734529 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/fc38d06d-c458-429d-8dbf-43aab1cd4e57-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-9mfdf\" (UID: \"fc38d06d-c458-429d-8dbf-43aab1cd4e57\") " pod="openshift-multus/multus-additional-cni-plugins-9mfdf" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.723044 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4dd9b451-9f5e-4822-b340-7557a89a3ce0-host-run-ovn-kubernetes\") pod \"ovnkube-node-dr5dp\" (UID: \"4dd9b451-9f5e-4822-b340-7557a89a3ce0\") " pod="openshift-ovn-kubernetes/ovnkube-node-dr5dp" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.734871 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4dd9b451-9f5e-4822-b340-7557a89a3ce0-host-cni-bin\") pod \"ovnkube-node-dr5dp\" (UID: \"4dd9b451-9f5e-4822-b340-7557a89a3ce0\") " pod="openshift-ovn-kubernetes/ovnkube-node-dr5dp" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.734909 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for 
volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4dd9b451-9f5e-4822-b340-7557a89a3ce0-ovn-node-metrics-cert\") pod \"ovnkube-node-dr5dp\" (UID: \"4dd9b451-9f5e-4822-b340-7557a89a3ce0\") " pod="openshift-ovn-kubernetes/ovnkube-node-dr5dp" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.734931 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4dd9b451-9f5e-4822-b340-7557a89a3ce0-ovnkube-script-lib\") pod \"ovnkube-node-dr5dp\" (UID: \"4dd9b451-9f5e-4822-b340-7557a89a3ce0\") " pod="openshift-ovn-kubernetes/ovnkube-node-dr5dp" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.734984 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f-os-release\") pod \"multus-bk79c\" (UID: \"3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f\") " pod="openshift-multus/multus-bk79c" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.735008 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f-multus-conf-dir\") pod \"multus-bk79c\" (UID: \"3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f\") " pod="openshift-multus/multus-bk79c" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.735036 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fc38d06d-c458-429d-8dbf-43aab1cd4e57-tuning-conf-dir\") pod \"multus-additional-cni-plugins-9mfdf\" (UID: \"fc38d06d-c458-429d-8dbf-43aab1cd4e57\") " pod="openshift-multus/multus-additional-cni-plugins-9mfdf" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.735060 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/4dd9b451-9f5e-4822-b340-7557a89a3ce0-node-log\") pod \"ovnkube-node-dr5dp\" (UID: \"4dd9b451-9f5e-4822-b340-7557a89a3ce0\") " pod="openshift-ovn-kubernetes/ovnkube-node-dr5dp" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.735088 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fc38d06d-c458-429d-8dbf-43aab1cd4e57-system-cni-dir\") pod \"multus-additional-cni-plugins-9mfdf\" (UID: \"fc38d06d-c458-429d-8dbf-43aab1cd4e57\") " pod="openshift-multus/multus-additional-cni-plugins-9mfdf" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.735114 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fc38d06d-c458-429d-8dbf-43aab1cd4e57-cnibin\") pod \"multus-additional-cni-plugins-9mfdf\" (UID: \"fc38d06d-c458-429d-8dbf-43aab1cd4e57\") " pod="openshift-multus/multus-additional-cni-plugins-9mfdf" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.735138 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4dd9b451-9f5e-4822-b340-7557a89a3ce0-run-systemd\") pod \"ovnkube-node-dr5dp\" (UID: \"4dd9b451-9f5e-4822-b340-7557a89a3ce0\") " pod="openshift-ovn-kubernetes/ovnkube-node-dr5dp" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.735278 5104 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.735293 5104 reconciler_common.go:299] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.735306 5104 
reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.735320 5104 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mjwtd\" (UniqueName: \"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.735332 5104 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.735344 5104 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.735354 5104 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.735365 5104 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.735378 5104 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.735389 5104 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.735401 5104 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.735413 5104 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.735426 5104 reconciler_common.go:299] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.735437 5104 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.735449 5104 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.735459 5104 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.735469 5104 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc 
kubenswrapper[5104]: I0130 00:11:44.735479 5104 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.735492 5104 reconciler_common.go:299] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.735503 5104 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.735516 5104 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.735526 5104 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.735538 5104 reconciler_common.go:299] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.735549 5104 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zth6t\" (UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.735562 5104 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qqbfk\" (UniqueName: 
\"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.735574 5104 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.735586 5104 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.735598 5104 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.735609 5104 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.735620 5104 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.735632 5104 reconciler_common.go:299] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.735646 5104 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7jjkz\" (UniqueName: 
\"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.735659 5104 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.735681 5104 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.736266 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fc38d06d-c458-429d-8dbf-43aab1cd4e57-tuning-conf-dir\") pod \"multus-additional-cni-plugins-9mfdf\" (UID: \"fc38d06d-c458-429d-8dbf-43aab1cd4e57\") " pod="openshift-multus/multus-additional-cni-plugins-9mfdf" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.736448 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4dd9b451-9f5e-4822-b340-7557a89a3ce0-ovnkube-script-lib\") pod \"ovnkube-node-dr5dp\" (UID: \"4dd9b451-9f5e-4822-b340-7557a89a3ce0\") " pod="openshift-ovn-kubernetes/ovnkube-node-dr5dp" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.736490 5104 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.736543 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4dd9b451-9f5e-4822-b340-7557a89a3ce0-node-log\") pod \"ovnkube-node-dr5dp\" (UID: 
\"4dd9b451-9f5e-4822-b340-7557a89a3ce0\") " pod="openshift-ovn-kubernetes/ovnkube-node-dr5dp" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.736578 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fc38d06d-c458-429d-8dbf-43aab1cd4e57-system-cni-dir\") pod \"multus-additional-cni-plugins-9mfdf\" (UID: \"fc38d06d-c458-429d-8dbf-43aab1cd4e57\") " pod="openshift-multus/multus-additional-cni-plugins-9mfdf" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.736580 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-57wnv\" (UniqueName: \"kubernetes.io/projected/27b37cd2-349b-4e9b-9665-06efa944384c-kube-api-access-57wnv\") pod \"node-resolver-qpj6b\" (UID: \"27b37cd2-349b-4e9b-9665-06efa944384c\") " pod="openshift-dns/node-resolver-qpj6b" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.736622 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fc38d06d-c458-429d-8dbf-43aab1cd4e57-cnibin\") pod \"multus-additional-cni-plugins-9mfdf\" (UID: \"fc38d06d-c458-429d-8dbf-43aab1cd4e57\") " pod="openshift-multus/multus-additional-cni-plugins-9mfdf" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.736597 5104 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.735610 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/fc38d06d-c458-429d-8dbf-43aab1cd4e57-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-9mfdf\" (UID: \"fc38d06d-c458-429d-8dbf-43aab1cd4e57\") " pod="openshift-multus/multus-additional-cni-plugins-9mfdf" Jan 30 00:11:44 crc 
kubenswrapper[5104]: I0130 00:11:44.736658 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4dd9b451-9f5e-4822-b340-7557a89a3ce0-run-systemd\") pod \"ovnkube-node-dr5dp\" (UID: \"4dd9b451-9f5e-4822-b340-7557a89a3ce0\") " pod="openshift-ovn-kubernetes/ovnkube-node-dr5dp" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.736735 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4dd9b451-9f5e-4822-b340-7557a89a3ce0-host-cni-bin\") pod \"ovnkube-node-dr5dp\" (UID: \"4dd9b451-9f5e-4822-b340-7557a89a3ce0\") " pod="openshift-ovn-kubernetes/ovnkube-node-dr5dp" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.736737 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f-os-release\") pod \"multus-bk79c\" (UID: \"3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f\") " pod="openshift-multus/multus-bk79c" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.736775 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f-multus-conf-dir\") pod \"multus-bk79c\" (UID: \"3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f\") " pod="openshift-multus/multus-bk79c" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.736950 5104 reconciler_common.go:299] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.736967 5104 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc 
kubenswrapper[5104]: I0130 00:11:44.736979 5104 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zsb9b\" (UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.736990 5104 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.737002 5104 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.737039 5104 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.737070 5104 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.737082 5104 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.737094 5104 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.737105 5104 reconciler_common.go:299] "Volume detached for 
volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.737118 5104 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.737131 5104 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qgrkj\" (UniqueName: \"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.737187 5104 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.737201 5104 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.737213 5104 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.737227 5104 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.737237 5104 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.737262 5104 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.737274 5104 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-twvbl\" (UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.737285 5104 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.737297 5104 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.737309 5104 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.737320 5104 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8pskd\" (UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.737332 5104 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4g8ts\" (UniqueName: 
\"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.737341 5104 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.737352 5104 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.737362 5104 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.737377 5104 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.737388 5104 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.737399 5104 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.737412 5104 reconciler_common.go:299] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" 
Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.737423 5104 reconciler_common.go:299] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.737433 5104 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.737448 5104 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.737460 5104 reconciler_common.go:299] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.737471 5104 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.737483 5104 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hm9x7\" (UniqueName: \"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.737495 5104 reconciler_common.go:299] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.737506 5104 
reconciler_common.go:299] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.737517 5104 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.737529 5104 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.737541 5104 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.737553 5104 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.737565 5104 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.737578 5104 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.737590 5104 reconciler_common.go:299] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") on 
node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.737601 5104 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.737614 5104 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.737626 5104 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.737638 5104 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.737650 5104 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.737661 5104 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.737672 5104 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 
00:11:44.737683 5104 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.737694 5104 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.737706 5104 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sbc2l\" (UniqueName: \"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.737718 5104 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.737729 5104 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.737740 5104 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.737753 5104 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.737764 5104 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.737777 5104 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.737790 5104 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.737802 5104 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.737813 5104 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.737825 5104 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.737836 5104 reconciler_common.go:299] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.737869 5104 reconciler_common.go:299] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 
00:11:44.737882 5104 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.737893 5104 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.737906 5104 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.737917 5104 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.737929 5104 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.737944 5104 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.737984 5104 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.737997 5104 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.738031 5104 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.738065 5104 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.738078 5104 reconciler_common.go:299] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.738089 5104 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.738101 5104 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.738114 5104 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.738125 5104 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nspp\" (UniqueName: 
\"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.738137 5104 reconciler_common.go:299] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.738148 5104 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.738159 5104 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.738170 5104 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.738181 5104 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.738191 5104 reconciler_common.go:299] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.738202 5104 reconciler_common.go:299] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc 
kubenswrapper[5104]: I0130 00:11:44.738214 5104 reconciler_common.go:299] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.738225 5104 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.738237 5104 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.738249 5104 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l87hs\" (UniqueName: \"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.738262 5104 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.738272 5104 reconciler_common.go:299] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.738284 5104 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.738295 5104 
reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.738307 5104 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xfp5s\" (UniqueName: \"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.738319 5104 reconciler_common.go:299] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.739025 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkzd7\" (UniqueName: \"kubernetes.io/projected/2f49b5db-a679-4eef-9bf2-8d0275caac12-kube-api-access-tkzd7\") pod \"machine-config-daemon-jzfxc\" (UID: \"2f49b5db-a679-4eef-9bf2-8d0275caac12\") " pod="openshift-machine-config-operator/machine-config-daemon-jzfxc" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.739377 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-85jg5\" (UniqueName: \"kubernetes.io/projected/fc38d06d-c458-429d-8dbf-43aab1cd4e57-kube-api-access-85jg5\") pod \"multus-additional-cni-plugins-9mfdf\" (UID: \"fc38d06d-c458-429d-8dbf-43aab1cd4e57\") " pod="openshift-multus/multus-additional-cni-plugins-9mfdf" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.741950 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4dd9b451-9f5e-4822-b340-7557a89a3ce0-ovn-node-metrics-cert\") pod \"ovnkube-node-dr5dp\" (UID: \"4dd9b451-9f5e-4822-b340-7557a89a3ce0\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-dr5dp" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.743327 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfj8r\" (UniqueName: \"kubernetes.io/projected/16c76ea1-575d-492f-b64a-9116b99a5b28-kube-api-access-vfj8r\") pod \"node-ca-qnqx2\" (UID: \"16c76ea1-575d-492f-b64a-9116b99a5b28\") " pod="openshift-image-registry/node-ca-qnqx2" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.743605 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbm4b\" (UniqueName: \"kubernetes.io/projected/8549d8ab-08fd-4d10-b03e-d162d745184a-kube-api-access-hbm4b\") pod \"network-metrics-daemon-gvjb6\" (UID: \"8549d8ab-08fd-4d10-b03e-d162d745184a\") " pod="openshift-multus/network-metrics-daemon-gvjb6" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.744288 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbld6\" (UniqueName: \"kubernetes.io/projected/3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f-kube-api-access-tbld6\") pod \"multus-bk79c\" (UID: \"3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f\") " pod="openshift-multus/multus-bk79c" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.744714 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-qpj6b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27b37cd2-349b-4e9b-9665-06efa944384c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57wnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qpj6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.749147 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkmsn\" (UniqueName: \"kubernetes.io/projected/4dd9b451-9f5e-4822-b340-7557a89a3ce0-kube-api-access-qkmsn\") pod \"ovnkube-node-dr5dp\" (UID: \"4dd9b451-9f5e-4822-b340-7557a89a3ce0\") " pod="openshift-ovn-kubernetes/ovnkube-node-dr5dp" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.752963 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4vhr\" (UniqueName: 
\"kubernetes.io/projected/925f8c53-ccbf-4f3c-a811-4d64d678e217-kube-api-access-t4vhr\") pod \"ovnkube-control-plane-57b78d8988-zg4cj\" (UID: \"925f8c53-ccbf-4f3c-a811-4d64d678e217\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-zg4cj" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.753637 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eafa3f8d-ea5b-4973-b2fe-537afe846212\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://39527af563278eaf7f4de232e9b050b0a2a37b4f221fbcff6253ffbfc6a6db05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b0901a202d5e8c1e87d98c3af50e89ff2f04e3048aa45f79db8a23a1020c0178\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6472a07b9e0d1d4d2094e9fe4464e17f6230a2915a19bb59bd54df043380b9f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://986b21b22cd3bdf35c46b74e23ebf17435e4f31f7fc4cb8270e7bef6c7d3aeb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://986b21b22cd3bdf35c46b74e23ebf17435e4f31f7fc4cb8270e7bef6c7d3aeb3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"
podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:20Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.763056 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.766014 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.766044 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.766054 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.766069 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.766080 5104 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:44Z","lastTransitionTime":"2026-01-30T00:11:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.770537 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.777625 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qnqx2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16c76ea1-575d-492f-b64a-9116b99a5b28\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfj8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qnqx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.779255 5104 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.788041 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea287dd4-000d-4cad-8964-eea48612652e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://383804c6c2c049cb0469a54bdc63fa42ec853ada3540352b5520d7b25d1da994\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{
\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://b8f7b53bbb2fea415aa6f8cab552a634e497844f09ceab42a0dccba0cc0d62fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bea04eb937eda3bc23c54503bd818434d7a6f7fab1b23383843cc7bf8379462b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube
-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://00629bfc42e0323311bb23b075167b46d96260c873bb2179d4b4e10a20c048ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-d
ir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.799610 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:44 crc kubenswrapper[5104]: E0130 00:11:44.803674 5104 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 30 00:11:44 crc kubenswrapper[5104]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,Command:[/bin/bash -c #!/bin/bash Jan 30 00:11:44 crc kubenswrapper[5104]: set -o allexport Jan 30 00:11:44 crc kubenswrapper[5104]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Jan 30 00:11:44 crc kubenswrapper[5104]: source /etc/kubernetes/apiserver-url.env Jan 30 00:11:44 crc kubenswrapper[5104]: else Jan 30 00:11:44 crc kubenswrapper[5104]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Jan 30 00:11:44 crc kubenswrapper[5104]: exit 1 Jan 30 00:11:44 crc kubenswrapper[5104]: fi Jan 30 00:11:44 crc kubenswrapper[5104]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Jan 30 00:11:44 crc 
kubenswrapper[5104]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.20.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:951276a60f15185a05902cf1ec49b6db3e4f049ec638828b336aed496f8dfc45,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b5000f8f055fd8f734ef74afbd9bd5333a38345cbc4959ddaad728b8394bccd4,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be136d591a0eeb3f7bedf04aabb5481a23b6645316d5cef3cd5be1787344c2b5,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Valu
e:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91997a073272252cac9cd31915ec74217637c55d1abc725107c6eb677ddddc9b,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a974f04d4aefdb39bf2d4649b24e7e0e87685afa3d07ca46234f1a0c5688e4b,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metad
ata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7xz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-7bdcf4f5bd-7fjxv_openshift-network-operator(34177974-8d82-49d2-a763-391d0df3bbd8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 30 00:11:44 crc kubenswrapper[5104]: > logger="UnhandledError" Jan 30 00:11:44 crc kubenswrapper[5104]: E0130 00:11:44.804762 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" podUID="34177974-8d82-49d2-a763-391d0df3bbd8" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.813423 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-bk79c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbld6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bk79c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.827970 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 30 00:11:44 crc kubenswrapper[5104]: E0130 00:11:44.843325 5104 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 30 00:11:44 crc kubenswrapper[5104]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Jan 30 00:11:44 crc kubenswrapper[5104]: if [[ -f "/env/_master" ]]; then Jan 30 00:11:44 crc kubenswrapper[5104]: set -o allexport Jan 30 00:11:44 crc kubenswrapper[5104]: source "/env/_master" Jan 30 00:11:44 crc kubenswrapper[5104]: set +o allexport Jan 30 00:11:44 crc kubenswrapper[5104]: fi Jan 30 00:11:44 crc kubenswrapper[5104]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. 
Jan 30 00:11:44 crc kubenswrapper[5104]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Jan 30 00:11:44 crc kubenswrapper[5104]: ho_enable="--enable-hybrid-overlay" Jan 30 00:11:44 crc kubenswrapper[5104]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Jan 30 00:11:44 crc kubenswrapper[5104]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Jan 30 00:11:44 crc kubenswrapper[5104]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Jan 30 00:11:44 crc kubenswrapper[5104]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Jan 30 00:11:44 crc kubenswrapper[5104]: --webhook-cert-dir="/etc/webhook-cert" \ Jan 30 00:11:44 crc kubenswrapper[5104]: --webhook-host=127.0.0.1 \ Jan 30 00:11:44 crc kubenswrapper[5104]: --webhook-port=9743 \ Jan 30 00:11:44 crc kubenswrapper[5104]: ${ho_enable} \ Jan 30 00:11:44 crc kubenswrapper[5104]: --enable-interconnect \ Jan 30 00:11:44 crc kubenswrapper[5104]: --disable-approver \ Jan 30 00:11:44 crc kubenswrapper[5104]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Jan 30 00:11:44 crc kubenswrapper[5104]: --wait-for-kubernetes-api=200s \ Jan 30 00:11:44 crc kubenswrapper[5104]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Jan 30 00:11:44 crc kubenswrapper[5104]: --loglevel="${LOGLEVEL}" Jan 30 00:11:44 crc kubenswrapper[5104]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: 
{{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 30 00:11:44 crc kubenswrapper[5104]: > logger="UnhandledError" Jan 30 00:11:44 crc kubenswrapper[5104]: E0130 00:11:44.846303 5104 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 30 00:11:44 crc kubenswrapper[5104]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Jan 30 00:11:44 crc 
kubenswrapper[5104]: if [[ -f "/env/_master" ]]; then Jan 30 00:11:44 crc kubenswrapper[5104]: set -o allexport Jan 30 00:11:44 crc kubenswrapper[5104]: source "/env/_master" Jan 30 00:11:44 crc kubenswrapper[5104]: set +o allexport Jan 30 00:11:44 crc kubenswrapper[5104]: fi Jan 30 00:11:44 crc kubenswrapper[5104]: Jan 30 00:11:44 crc kubenswrapper[5104]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Jan 30 00:11:44 crc kubenswrapper[5104]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Jan 30 00:11:44 crc kubenswrapper[5104]: --disable-webhook \ Jan 30 00:11:44 crc kubenswrapper[5104]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Jan 30 00:11:44 crc kubenswrapper[5104]: --loglevel="${LOGLEVEL}" Jan 30 00:11:44 crc kubenswrapper[5104]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 30 00:11:44 crc kubenswrapper[5104]: > logger="UnhandledError" Jan 30 00:11:44 crc kubenswrapper[5104]: E0130 00:11:44.847535 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-dgvkt" 
podUID="fc4541ce-7789-4670-bc75-5c2868e52ce0" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.853911 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 30 00:11:44 crc kubenswrapper[5104]: W0130 00:11:44.864332 5104 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod428b39f5_eb1c_4f65_b7a4_eeb6e84860cc.slice/crio-bd7bd4efc5450a7bfda9b52ab1914f0fbf838bdcdde974b214888f7bbdd90321 WatchSource:0}: Error finding container bd7bd4efc5450a7bfda9b52ab1914f0fbf838bdcdde974b214888f7bbdd90321: Status 404 returned error can't find the container with id bd7bd4efc5450a7bfda9b52ab1914f0fbf838bdcdde974b214888f7bbdd90321 Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.864916 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-9mfdf" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.867598 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.867646 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.867665 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.867693 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:44 crc kubenswrapper[5104]: E0130 00:11:44.867587 5104 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dsgwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-5jnd7_openshift-network-operator(428b39f5-eb1c-4f65-b7a4-eeb6e84860cc): CreateContainerConfigError: services 
have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.867712 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:44Z","lastTransitionTime":"2026-01-30T00:11:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:44 crc kubenswrapper[5104]: E0130 00:11:44.868898 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-5jnd7" podUID="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.874705 5104 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-bk79c" Jan 30 00:11:44 crc kubenswrapper[5104]: W0130 00:11:44.878462 5104 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc38d06d_c458_429d_8dbf_43aab1cd4e57.slice/crio-e8ee2839da3c5d1bead533312a1064154b635caaf9d4c1713e4dca126d25d0b6 WatchSource:0}: Error finding container e8ee2839da3c5d1bead533312a1064154b635caaf9d4c1713e4dca126d25d0b6: Status 404 returned error can't find the container with id e8ee2839da3c5d1bead533312a1064154b635caaf9d4c1713e4dca126d25d0b6 Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.879984 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"bd7bd4efc5450a7bfda9b52ab1914f0fbf838bdcdde974b214888f7bbdd90321"} Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.881892 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"67502f5a0ba4af2ad3d052e096d62181fce5f68321dd3e643a0ccd20fc02c18d"} Jan 30 00:11:44 crc kubenswrapper[5104]: E0130 00:11:44.883350 5104 kuberuntime_manager.go:1358] "Unhandled Error" err="init container 
&Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-85jg5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-9mfdf_openshift-multus(fc38d06d-c458-429d-8dbf-43aab1cd4e57): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.883424 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"866192d11b07124e29b472c2214de5aae95c85ec9b9f6e0e16e08c17700101dd"} Jan 30 00:11:44 crc kubenswrapper[5104]: E0130 00:11:44.884505 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-9mfdf" podUID="fc38d06d-c458-429d-8dbf-43aab1cd4e57" Jan 30 00:11:44 crc kubenswrapper[5104]: E0130 00:11:44.884713 5104 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dsgwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-5jnd7_openshift-network-operator(428b39f5-eb1c-4f65-b7a4-eeb6e84860cc): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 30 00:11:44 crc kubenswrapper[5104]: E0130 00:11:44.885939 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-5jnd7" podUID="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" Jan 30 00:11:44 crc kubenswrapper[5104]: E0130 00:11:44.886036 5104 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 30 00:11:44 crc kubenswrapper[5104]: container 
&Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,Command:[/bin/bash -c #!/bin/bash Jan 30 00:11:44 crc kubenswrapper[5104]: set -o allexport Jan 30 00:11:44 crc kubenswrapper[5104]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Jan 30 00:11:44 crc kubenswrapper[5104]: source /etc/kubernetes/apiserver-url.env Jan 30 00:11:44 crc kubenswrapper[5104]: else Jan 30 00:11:44 crc kubenswrapper[5104]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Jan 30 00:11:44 crc kubenswrapper[5104]: exit 1 Jan 30 00:11:44 crc kubenswrapper[5104]: fi Jan 30 00:11:44 crc kubenswrapper[5104]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Jan 30 00:11:44 crc kubenswrapper[5104]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.20.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:951276a60f15185a05902cf1ec49b6db3e4f049ec638828b336aed496f8dfc45,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b5000f8f055fd8f734ef74afbd9bd5333a38345cbc4959ddaad728b8394bccd4,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96
ac3c4f8812085d0b6eb6e3f513b5bac782,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be136d591a0eeb3f7bedf04aabb5481a23b6645316d5cef3cd5be1787344c2b5,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_
CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91997a073272252cac9cd31915ec74217637c55d1abc725107c6eb677ddddc9b,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a974f04d4aefdb39bf2d4649b24e7e0e87685afa3d07ca46234f1a0c5688e4b,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7xz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
network-operator-7bdcf4f5bd-7fjxv_openshift-network-operator(34177974-8d82-49d2-a763-391d0df3bbd8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 30 00:11:44 crc kubenswrapper[5104]: > logger="UnhandledError" Jan 30 00:11:44 crc kubenswrapper[5104]: E0130 00:11:44.886469 5104 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 30 00:11:44 crc kubenswrapper[5104]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Jan 30 00:11:44 crc kubenswrapper[5104]: if [[ -f "/env/_master" ]]; then Jan 30 00:11:44 crc kubenswrapper[5104]: set -o allexport Jan 30 00:11:44 crc kubenswrapper[5104]: source "/env/_master" Jan 30 00:11:44 crc kubenswrapper[5104]: set +o allexport Jan 30 00:11:44 crc kubenswrapper[5104]: fi Jan 30 00:11:44 crc kubenswrapper[5104]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. 
Jan 30 00:11:44 crc kubenswrapper[5104]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Jan 30 00:11:44 crc kubenswrapper[5104]: ho_enable="--enable-hybrid-overlay" Jan 30 00:11:44 crc kubenswrapper[5104]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Jan 30 00:11:44 crc kubenswrapper[5104]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Jan 30 00:11:44 crc kubenswrapper[5104]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Jan 30 00:11:44 crc kubenswrapper[5104]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Jan 30 00:11:44 crc kubenswrapper[5104]: --webhook-cert-dir="/etc/webhook-cert" \ Jan 30 00:11:44 crc kubenswrapper[5104]: --webhook-host=127.0.0.1 \ Jan 30 00:11:44 crc kubenswrapper[5104]: --webhook-port=9743 \ Jan 30 00:11:44 crc kubenswrapper[5104]: ${ho_enable} \ Jan 30 00:11:44 crc kubenswrapper[5104]: --enable-interconnect \ Jan 30 00:11:44 crc kubenswrapper[5104]: --disable-approver \ Jan 30 00:11:44 crc kubenswrapper[5104]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Jan 30 00:11:44 crc kubenswrapper[5104]: --wait-for-kubernetes-api=200s \ Jan 30 00:11:44 crc kubenswrapper[5104]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Jan 30 00:11:44 crc kubenswrapper[5104]: --loglevel="${LOGLEVEL}" Jan 30 00:11:44 crc kubenswrapper[5104]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: 
{{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 30 00:11:44 crc kubenswrapper[5104]: > logger="UnhandledError" Jan 30 00:11:44 crc kubenswrapper[5104]: E0130 00:11:44.887185 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" 
podUID="34177974-8d82-49d2-a763-391d0df3bbd8" Jan 30 00:11:44 crc kubenswrapper[5104]: E0130 00:11:44.889151 5104 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 30 00:11:44 crc kubenswrapper[5104]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Jan 30 00:11:44 crc kubenswrapper[5104]: if [[ -f "/env/_master" ]]; then Jan 30 00:11:44 crc kubenswrapper[5104]: set -o allexport Jan 30 00:11:44 crc kubenswrapper[5104]: source "/env/_master" Jan 30 00:11:44 crc kubenswrapper[5104]: set +o allexport Jan 30 00:11:44 crc kubenswrapper[5104]: fi Jan 30 00:11:44 crc kubenswrapper[5104]: Jan 30 00:11:44 crc kubenswrapper[5104]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Jan 30 00:11:44 crc kubenswrapper[5104]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Jan 30 00:11:44 crc kubenswrapper[5104]: --disable-webhook \ Jan 30 00:11:44 crc kubenswrapper[5104]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Jan 30 00:11:44 crc kubenswrapper[5104]: --loglevel="${LOGLEVEL}" Jan 30 00:11:44 crc kubenswrapper[5104]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 30 00:11:44 crc kubenswrapper[5104]: > logger="UnhandledError" Jan 30 00:11:44 crc kubenswrapper[5104]: E0130 00:11:44.890318 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-dgvkt" 
podUID="fc4541ce-7789-4670-bc75-5c2868e52ce0" Jan 30 00:11:44 crc kubenswrapper[5104]: E0130 00:11:44.890610 5104 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 30 00:11:44 crc kubenswrapper[5104]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Jan 30 00:11:44 crc kubenswrapper[5104]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Jan 30 00:11:44 crc kubenswrapper[5104]: ],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{
Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tbld6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-bk79c_openshift-multus(3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 30 00:11:44 crc kubenswrapper[5104]: > logger="UnhandledError" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.891427 5104 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-qpj6b" Jan 30 00:11:44 crc kubenswrapper[5104]: E0130 00:11:44.892000 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-bk79c" podUID="3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.900004 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1afc018-4e45-49c3-a326-4068c590483b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://3ea0c352cb0e6754f7a7b428ac74c8d1d59af3fcd309fead8f147b31fc9d84b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\
":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://ba7755cd1898e33390a59405284ca9bc8ab6567dee2e7c1134c9093d25ae341f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResourc
es\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://560668f5a529df74a5be2ea17dcc5c09bd64122a4f78def29e8d38b4f098ec64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://10172da6a5c353a3c321326f80b9af59fe5c6acdb48f8951f30401fa25fde394\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:27Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"dat
a-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://1a8af248a457a824a347b4bacdb934ce6f91151e6814ba046ecfd0b2f9fef1c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c3dfaca6ebdcc7e86e59721d7b1c4e7825a4a23ea5ee58dc5b1445a63994b711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7
a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3dfaca6ebdcc7e86e59721d7b1c4e7825a4a23ea5ee58dc5b1445a63994b711\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://82c9d3ad0af1dbe7691b30eb224da8a661baeac16b755dc1fccf77c90dda404a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82c9d3ad0af1dbe7691b30eb224da8a661baeac16b755dc1fccf77c90dda404a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://dd29f88f850f66d48dcb
41d9ff4b6ed03ce53947fcf1d89e94eb89734d32a9af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd29f88f850f66d48dcb41d9ff4b6ed03ce53947fcf1d89e94eb89734d32a9af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:20Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:44 crc kubenswrapper[5104]: W0130 00:11:44.906441 5104 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod27b37cd2_349b_4e9b_9665_06efa944384c.slice/crio-476bb690b8a4cf60e461e4559da177e627020ccb51cb4eb21cd1e4b8788e04d7 WatchSource:0}: Error finding container 476bb690b8a4cf60e461e4559da177e627020ccb51cb4eb21cd1e4b8788e04d7: Status 404 returned error can't find the container with id 476bb690b8a4cf60e461e4559da177e627020ccb51cb4eb21cd1e4b8788e04d7 Jan 30 00:11:44 crc kubenswrapper[5104]: E0130 00:11:44.909619 5104 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 30 00:11:44 crc kubenswrapper[5104]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/bin/bash -c #!/bin/bash Jan 30 00:11:44 crc kubenswrapper[5104]: set -uo pipefail Jan 30 00:11:44 crc kubenswrapper[5104]: Jan 30 00:11:44 crc kubenswrapper[5104]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Jan 30 00:11:44 crc kubenswrapper[5104]: Jan 30 00:11:44 crc kubenswrapper[5104]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Jan 30 00:11:44 crc kubenswrapper[5104]: HOSTS_FILE="/etc/hosts" Jan 30 00:11:44 crc kubenswrapper[5104]: TEMP_FILE="/tmp/hosts.tmp" Jan 30 00:11:44 crc kubenswrapper[5104]: Jan 30 00:11:44 crc kubenswrapper[5104]: IFS=', ' read -r -a services <<< "${SERVICES}" Jan 30 00:11:44 crc kubenswrapper[5104]: Jan 30 00:11:44 crc kubenswrapper[5104]: # Make a temporary file with the old hosts file's attributes. Jan 30 00:11:44 crc kubenswrapper[5104]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Jan 30 00:11:44 crc kubenswrapper[5104]: echo "Failed to preserve hosts file. Exiting." 
Jan 30 00:11:44 crc kubenswrapper[5104]: exit 1 Jan 30 00:11:44 crc kubenswrapper[5104]: fi Jan 30 00:11:44 crc kubenswrapper[5104]: Jan 30 00:11:44 crc kubenswrapper[5104]: while true; do Jan 30 00:11:44 crc kubenswrapper[5104]: declare -A svc_ips Jan 30 00:11:44 crc kubenswrapper[5104]: for svc in "${services[@]}"; do Jan 30 00:11:44 crc kubenswrapper[5104]: # Fetch service IP from cluster dns if present. We make several tries Jan 30 00:11:44 crc kubenswrapper[5104]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Jan 30 00:11:44 crc kubenswrapper[5104]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Jan 30 00:11:44 crc kubenswrapper[5104]: # support UDP loadbalancers and require reaching DNS through TCP. Jan 30 00:11:44 crc kubenswrapper[5104]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Jan 30 00:11:44 crc kubenswrapper[5104]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Jan 30 00:11:44 crc kubenswrapper[5104]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Jan 30 00:11:44 crc kubenswrapper[5104]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Jan 30 00:11:44 crc kubenswrapper[5104]: for i in ${!cmds[*]} Jan 30 00:11:44 crc kubenswrapper[5104]: do Jan 30 00:11:44 crc kubenswrapper[5104]: ips=($(eval "${cmds[i]}")) Jan 30 00:11:44 crc kubenswrapper[5104]: if [[ "$?" 
-eq 0 && "${#ips[@]}" -ne 0 ]]; then Jan 30 00:11:44 crc kubenswrapper[5104]: svc_ips["${svc}"]="${ips[@]}" Jan 30 00:11:44 crc kubenswrapper[5104]: break Jan 30 00:11:44 crc kubenswrapper[5104]: fi Jan 30 00:11:44 crc kubenswrapper[5104]: done Jan 30 00:11:44 crc kubenswrapper[5104]: done Jan 30 00:11:44 crc kubenswrapper[5104]: Jan 30 00:11:44 crc kubenswrapper[5104]: # Update /etc/hosts only if we get valid service IPs Jan 30 00:11:44 crc kubenswrapper[5104]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Jan 30 00:11:44 crc kubenswrapper[5104]: # Stale entries could exist in /etc/hosts if the service is deleted Jan 30 00:11:44 crc kubenswrapper[5104]: if [[ -n "${svc_ips[*]-}" ]]; then Jan 30 00:11:44 crc kubenswrapper[5104]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Jan 30 00:11:44 crc kubenswrapper[5104]: if ! sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Jan 30 00:11:44 crc kubenswrapper[5104]: # Only continue rebuilding the hosts entries if its original content is preserved Jan 30 00:11:44 crc kubenswrapper[5104]: sleep 60 & wait Jan 30 00:11:44 crc kubenswrapper[5104]: continue Jan 30 00:11:44 crc kubenswrapper[5104]: fi Jan 30 00:11:44 crc kubenswrapper[5104]: Jan 30 00:11:44 crc kubenswrapper[5104]: # Append resolver entries for services Jan 30 00:11:44 crc kubenswrapper[5104]: rc=0 Jan 30 00:11:44 crc kubenswrapper[5104]: for svc in "${!svc_ips[@]}"; do Jan 30 00:11:44 crc kubenswrapper[5104]: for ip in ${svc_ips[${svc}]}; do Jan 30 00:11:44 crc kubenswrapper[5104]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? 
Jan 30 00:11:44 crc kubenswrapper[5104]: done Jan 30 00:11:44 crc kubenswrapper[5104]: done Jan 30 00:11:44 crc kubenswrapper[5104]: if [[ $rc -ne 0 ]]; then Jan 30 00:11:44 crc kubenswrapper[5104]: sleep 60 & wait Jan 30 00:11:44 crc kubenswrapper[5104]: continue Jan 30 00:11:44 crc kubenswrapper[5104]: fi Jan 30 00:11:44 crc kubenswrapper[5104]: Jan 30 00:11:44 crc kubenswrapper[5104]: Jan 30 00:11:44 crc kubenswrapper[5104]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Jan 30 00:11:44 crc kubenswrapper[5104]: # Replace /etc/hosts with our modified version if needed Jan 30 00:11:44 crc kubenswrapper[5104]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Jan 30 00:11:44 crc kubenswrapper[5104]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Jan 30 00:11:44 crc kubenswrapper[5104]: fi Jan 30 00:11:44 crc kubenswrapper[5104]: sleep 60 & wait Jan 30 00:11:44 crc kubenswrapper[5104]: unset svc_ips Jan 30 00:11:44 crc kubenswrapper[5104]: done Jan 30 00:11:44 crc kubenswrapper[5104]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-57wnv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-qpj6b_openshift-dns(27b37cd2-349b-4e9b-9665-06efa944384c): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 30 00:11:44 crc kubenswrapper[5104]: > logger="UnhandledError" Jan 30 00:11:44 crc kubenswrapper[5104]: E0130 00:11:44.913477 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-qpj6b" podUID="27b37cd2-349b-4e9b-9665-06efa944384c" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.914511 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.924986 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-qnqx2" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.925232 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.934347 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gvjb6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8549d8ab-08fd-4d10-b03e-d162d745184a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbm4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbm4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gvjb6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.939661 5104 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-dr5dp" Jan 30 00:11:44 crc kubenswrapper[5104]: W0130 00:11:44.943682 5104 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16c76ea1_575d_492f_b64a_9116b99a5b28.slice/crio-e44f5d3944afebfe5ca116269d66c885eea62defc1af18bbb2a1a3b05cb6ba0e WatchSource:0}: Error finding container e44f5d3944afebfe5ca116269d66c885eea62defc1af18bbb2a1a3b05cb6ba0e: Status 404 returned error can't find the container with id e44f5d3944afebfe5ca116269d66c885eea62defc1af18bbb2a1a3b05cb6ba0e Jan 30 00:11:44 crc kubenswrapper[5104]: E0130 00:11:44.947098 5104 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 30 00:11:44 crc kubenswrapper[5104]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Jan 30 00:11:44 crc kubenswrapper[5104]: while [ true ]; Jan 30 00:11:44 crc kubenswrapper[5104]: do Jan 30 00:11:44 crc kubenswrapper[5104]: for f in $(ls /tmp/serviceca); do Jan 30 00:11:44 crc kubenswrapper[5104]: echo $f Jan 30 00:11:44 crc kubenswrapper[5104]: ca_file_path="/tmp/serviceca/${f}" Jan 30 00:11:44 crc kubenswrapper[5104]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Jan 30 00:11:44 crc kubenswrapper[5104]: reg_dir_path="/etc/docker/certs.d/${f}" Jan 30 00:11:44 crc kubenswrapper[5104]: if [ -e "${reg_dir_path}" ]; then Jan 30 00:11:44 crc kubenswrapper[5104]: cp -u $ca_file_path $reg_dir_path/ca.crt Jan 30 00:11:44 crc kubenswrapper[5104]: else Jan 30 00:11:44 crc kubenswrapper[5104]: mkdir $reg_dir_path Jan 30 00:11:44 crc kubenswrapper[5104]: cp $ca_file_path $reg_dir_path/ca.crt Jan 30 00:11:44 crc kubenswrapper[5104]: fi Jan 30 00:11:44 crc kubenswrapper[5104]: done Jan 30 00:11:44 crc kubenswrapper[5104]: for d in $(ls /etc/docker/certs.d); 
do Jan 30 00:11:44 crc kubenswrapper[5104]: echo $d Jan 30 00:11:44 crc kubenswrapper[5104]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Jan 30 00:11:44 crc kubenswrapper[5104]: reg_conf_path="/tmp/serviceca/${dp}" Jan 30 00:11:44 crc kubenswrapper[5104]: if [ ! -e "${reg_conf_path}" ]; then Jan 30 00:11:44 crc kubenswrapper[5104]: rm -rf /etc/docker/certs.d/$d Jan 30 00:11:44 crc kubenswrapper[5104]: fi Jan 30 00:11:44 crc kubenswrapper[5104]: done Jan 30 00:11:44 crc kubenswrapper[5104]: sleep 60 & wait ${!} Jan 30 00:11:44 crc kubenswrapper[5104]: done Jan 30 00:11:44 crc kubenswrapper[5104]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vfj8r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
node-ca-qnqx2_openshift-image-registry(16c76ea1-575d-492f-b64a-9116b99a5b28): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 30 00:11:44 crc kubenswrapper[5104]: > logger="UnhandledError" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.948012 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-jzfxc" Jan 30 00:11:44 crc kubenswrapper[5104]: E0130 00:11:44.948753 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-qnqx2" podUID="16c76ea1-575d-492f-b64a-9116b99a5b28" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.950113 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dr5dp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4dd9b451-9f5e-4822-b340-7557a89a3ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmsn\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-qkmsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\
"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secr
ets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dr5dp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.954921 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-zg4cj" Jan 30 00:11:44 crc kubenswrapper[5104]: W0130 00:11:44.961710 5104 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4dd9b451_9f5e_4822_b340_7557a89a3ce0.slice/crio-5baac66e9d4e1ec5572319df2731c814892b2924f32a077f78cd3f4ac1cc77f7 WatchSource:0}: Error finding container 5baac66e9d4e1ec5572319df2731c814892b2924f32a077f78cd3f4ac1cc77f7: Status 404 returned error can't find the container with id 5baac66e9d4e1ec5572319df2731c814892b2924f32a077f78cd3f4ac1cc77f7 Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.965338 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzfxc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f49b5db-a679-4eef-9bf2-8d0275caac12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tkzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tkzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jzfxc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:44 crc kubenswrapper[5104]: E0130 00:11:44.969893 5104 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 30 00:11:44 crc kubenswrapper[5104]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Jan 30 00:11:44 crc kubenswrapper[5104]: apiVersion: v1 Jan 30 00:11:44 crc kubenswrapper[5104]: clusters: Jan 30 00:11:44 crc kubenswrapper[5104]: - cluster: Jan 30 00:11:44 crc kubenswrapper[5104]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Jan 30 00:11:44 crc kubenswrapper[5104]: server: https://api-int.crc.testing:6443 Jan 30 00:11:44 crc kubenswrapper[5104]: name: default-cluster Jan 30 00:11:44 crc kubenswrapper[5104]: contexts: Jan 30 00:11:44 crc kubenswrapper[5104]: - context: Jan 30 00:11:44 crc kubenswrapper[5104]: cluster: default-cluster Jan 30 00:11:44 crc kubenswrapper[5104]: namespace: default Jan 30 00:11:44 crc kubenswrapper[5104]: user: default-auth Jan 30 00:11:44 crc kubenswrapper[5104]: name: default-context Jan 30 00:11:44 crc kubenswrapper[5104]: current-context: default-context Jan 30 00:11:44 crc kubenswrapper[5104]: kind: Config Jan 30 00:11:44 crc kubenswrapper[5104]: preferences: {} Jan 30 00:11:44 crc kubenswrapper[5104]: users: Jan 30 00:11:44 crc kubenswrapper[5104]: - name: default-auth Jan 30 00:11:44 crc kubenswrapper[5104]: user: Jan 30 00:11:44 crc kubenswrapper[5104]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Jan 30 00:11:44 crc kubenswrapper[5104]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Jan 30 00:11:44 crc kubenswrapper[5104]: EOF Jan 30 00:11:44 crc kubenswrapper[5104]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qkmsn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-dr5dp_openshift-ovn-kubernetes(4dd9b451-9f5e-4822-b340-7557a89a3ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 30 00:11:44 crc kubenswrapper[5104]: > logger="UnhandledError" Jan 30 00:11:44 crc kubenswrapper[5104]: E0130 00:11:44.971066 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-dr5dp" podUID="4dd9b451-9f5e-4822-b340-7557a89a3ce0" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.971100 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.971147 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.971166 5104 kubelet_node_status.go:736] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.971194 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.971214 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:44Z","lastTransitionTime":"2026-01-30T00:11:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:44 crc kubenswrapper[5104]: W0130 00:11:44.979139 5104 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2f49b5db_a679_4eef_9bf2_8d0275caac12.slice/crio-a8939d7f9bc27e764589d85b4aa53c225b25a7c39b54040c60830c16633165ba WatchSource:0}: Error finding container a8939d7f9bc27e764589d85b4aa53c225b25a7c39b54040c60830c16633165ba: Status 404 returned error can't find the container with id a8939d7f9bc27e764589d85b4aa53c225b25a7c39b54040c60830c16633165ba Jan 30 00:11:44 crc kubenswrapper[5104]: W0130 00:11:44.979617 5104 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod925f8c53_ccbf_4f3c_a811_4d64d678e217.slice/crio-6b26fb8323c1fa1f41e9b6f71949dc98a78c2d69b578258e6da79a7d9af02855 WatchSource:0}: Error finding container 6b26fb8323c1fa1f41e9b6f71949dc98a78c2d69b578258e6da79a7d9af02855: Status 404 returned error can't find the container with id 6b26fb8323c1fa1f41e9b6f71949dc98a78c2d69b578258e6da79a7d9af02855 Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.981053 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a53efdae-bb47-4e91-8fd9-aa3ce42e07fe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://cefeb3f03767c76f93f967f91a3a91beb76d605eca9cbc8c1511e20275afe6f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\
\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6edbf8d3caa46b1b8204f581c4ee351245b3a0569a7dc860e8eebd05c21de73e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://341f0f24fd96be5b40281bed5ebcb965c115891201881ea7fca2d25b621efcf4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc47
5aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a59bc6c54fddbc4eea03ba9234e68465071e0ae39793c65b4dee93d13d3d8fc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59bc6c54fddbc4eea03ba9234e68465071e0ae39793c65b4dee93d13d3d8fc2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T00:11:11Z\\\",\\\"message\\\":\\\"rue\\\\nI0130 00:11:10.550402 1 observer_polling.go:159] Starting file observer\\\\nW0130 00:11:10.562731 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 00:11:10.562933 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0130 00:11:10.564179 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1712869688/tls.crt::/tmp/serving-cert-1712869688/tls.key\\\\\\\"\\\\nI0130 00:11:11.630504 
1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 00:11:11.635621 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 00:11:11.635658 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 00:11:11.635724 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 00:11:11.635735 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 00:11:11.641959 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 00:11:11.642000 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0130 00:11:11.642008 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0130 00:11:11.642013 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 00:11:11.642036 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 00:11:11.642046 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 00:11:11.642053 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 00:11:11.642062 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 00:11:11.644944 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T00:11:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://cb67eb59e5fa97f3ac0f355c63297316d06ab76329d05baadeb90ba933d0299b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ae33bfa613b46c1b72972cced863e6209b8e9eafade2bfbe6cd19d2fb5ee3591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae33bfa613b46c1b72972cced863e6209b8e9eafade2bfbe6cd19d2fb5ee3591\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:44 crc kubenswrapper[5104]: E0130 00:11:44.983491 5104 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 30 00:11:44 crc kubenswrapper[5104]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[/bin/bash -c #!/bin/bash Jan 30 00:11:44 crc kubenswrapper[5104]: set -euo pipefail Jan 30 00:11:44 crc kubenswrapper[5104]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Jan 30 00:11:44 crc kubenswrapper[5104]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Jan 30 00:11:44 crc kubenswrapper[5104]: # As the secret mount is optional we must wait for the files to be present. Jan 30 00:11:44 crc kubenswrapper[5104]: # The service is created in monitor.yaml and this is created in sdn.yaml. 
Jan 30 00:11:44 crc kubenswrapper[5104]: TS=$(date +%s) Jan 30 00:11:44 crc kubenswrapper[5104]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Jan 30 00:11:44 crc kubenswrapper[5104]: HAS_LOGGED_INFO=0 Jan 30 00:11:44 crc kubenswrapper[5104]: Jan 30 00:11:44 crc kubenswrapper[5104]: log_missing_certs(){ Jan 30 00:11:44 crc kubenswrapper[5104]: CUR_TS=$(date +%s) Jan 30 00:11:44 crc kubenswrapper[5104]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Jan 30 00:11:44 crc kubenswrapper[5104]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. Jan 30 00:11:44 crc kubenswrapper[5104]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Jan 30 00:11:44 crc kubenswrapper[5104]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. Jan 30 00:11:44 crc kubenswrapper[5104]: HAS_LOGGED_INFO=1 Jan 30 00:11:44 crc kubenswrapper[5104]: fi Jan 30 00:11:44 crc kubenswrapper[5104]: } Jan 30 00:11:44 crc kubenswrapper[5104]: while [[ ! -f "${TLS_PK}" || ! 
-f "${TLS_CERT}" ]] ; do Jan 30 00:11:44 crc kubenswrapper[5104]: log_missing_certs Jan 30 00:11:44 crc kubenswrapper[5104]: sleep 5 Jan 30 00:11:44 crc kubenswrapper[5104]: done Jan 30 00:11:44 crc kubenswrapper[5104]: Jan 30 00:11:44 crc kubenswrapper[5104]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Jan 30 00:11:44 crc kubenswrapper[5104]: exec /usr/bin/kube-rbac-proxy \ Jan 30 00:11:44 crc kubenswrapper[5104]: --logtostderr \ Jan 30 00:11:44 crc kubenswrapper[5104]: --secure-listen-address=:9108 \ Jan 30 00:11:44 crc kubenswrapper[5104]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Jan 30 00:11:44 crc kubenswrapper[5104]: --upstream=http://127.0.0.1:29108/ \ Jan 30 00:11:44 crc kubenswrapper[5104]: --tls-private-key-file=${TLS_PK} \ Jan 30 00:11:44 crc kubenswrapper[5104]: --tls-cert-file=${TLS_CERT} Jan 30 00:11:44 crc kubenswrapper[5104]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t4vhr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-zg4cj_openshift-ovn-kubernetes(925f8c53-ccbf-4f3c-a811-4d64d678e217): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 30 00:11:44 crc kubenswrapper[5104]: > logger="UnhandledError" Jan 30 00:11:44 crc kubenswrapper[5104]: E0130 00:11:44.983492 5104 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.20.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tkzd7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-jzfxc_openshift-machine-config-operator(2f49b5db-a679-4eef-9bf2-8d0275caac12): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 30 00:11:44 crc kubenswrapper[5104]: E0130 00:11:44.986471 5104 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml 
--tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tkzd7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-jzfxc_openshift-machine-config-operator(2f49b5db-a679-4eef-9bf2-8d0275caac12): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 30 00:11:44 crc kubenswrapper[5104]: E0130 00:11:44.986504 5104 
kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 30 00:11:44 crc kubenswrapper[5104]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Jan 30 00:11:44 crc kubenswrapper[5104]: if [[ -f "/env/_master" ]]; then Jan 30 00:11:44 crc kubenswrapper[5104]: set -o allexport Jan 30 00:11:44 crc kubenswrapper[5104]: source "/env/_master" Jan 30 00:11:44 crc kubenswrapper[5104]: set +o allexport Jan 30 00:11:44 crc kubenswrapper[5104]: fi Jan 30 00:11:44 crc kubenswrapper[5104]: Jan 30 00:11:44 crc kubenswrapper[5104]: ovn_v4_join_subnet_opt= Jan 30 00:11:44 crc kubenswrapper[5104]: if [[ "" != "" ]]; then Jan 30 00:11:44 crc kubenswrapper[5104]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Jan 30 00:11:44 crc kubenswrapper[5104]: fi Jan 30 00:11:44 crc kubenswrapper[5104]: ovn_v6_join_subnet_opt= Jan 30 00:11:44 crc kubenswrapper[5104]: if [[ "" != "" ]]; then Jan 30 00:11:44 crc kubenswrapper[5104]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Jan 30 00:11:44 crc kubenswrapper[5104]: fi Jan 30 00:11:44 crc kubenswrapper[5104]: Jan 30 00:11:44 crc kubenswrapper[5104]: ovn_v4_transit_switch_subnet_opt= Jan 30 00:11:44 crc kubenswrapper[5104]: if [[ "" != "" ]]; then Jan 30 00:11:44 crc kubenswrapper[5104]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Jan 30 00:11:44 crc kubenswrapper[5104]: fi Jan 30 00:11:44 crc kubenswrapper[5104]: ovn_v6_transit_switch_subnet_opt= Jan 30 00:11:44 crc kubenswrapper[5104]: if [[ "" != "" ]]; then Jan 30 00:11:44 crc kubenswrapper[5104]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Jan 30 00:11:44 crc kubenswrapper[5104]: fi Jan 30 00:11:44 crc kubenswrapper[5104]: Jan 30 00:11:44 crc kubenswrapper[5104]: dns_name_resolver_enabled_flag= Jan 30 00:11:44 crc kubenswrapper[5104]: if [[ "false" == "true" 
]]; then Jan 30 00:11:44 crc kubenswrapper[5104]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Jan 30 00:11:44 crc kubenswrapper[5104]: fi Jan 30 00:11:44 crc kubenswrapper[5104]: Jan 30 00:11:44 crc kubenswrapper[5104]: persistent_ips_enabled_flag="--enable-persistent-ips" Jan 30 00:11:44 crc kubenswrapper[5104]: Jan 30 00:11:44 crc kubenswrapper[5104]: # This is needed so that converting clusters from GA to TP Jan 30 00:11:44 crc kubenswrapper[5104]: # will rollout control plane pods as well Jan 30 00:11:44 crc kubenswrapper[5104]: network_segmentation_enabled_flag= Jan 30 00:11:44 crc kubenswrapper[5104]: multi_network_enabled_flag= Jan 30 00:11:44 crc kubenswrapper[5104]: if [[ "true" == "true" ]]; then Jan 30 00:11:44 crc kubenswrapper[5104]: multi_network_enabled_flag="--enable-multi-network" Jan 30 00:11:44 crc kubenswrapper[5104]: fi Jan 30 00:11:44 crc kubenswrapper[5104]: if [[ "true" == "true" ]]; then Jan 30 00:11:44 crc kubenswrapper[5104]: if [[ "true" != "true" ]]; then Jan 30 00:11:44 crc kubenswrapper[5104]: multi_network_enabled_flag="--enable-multi-network" Jan 30 00:11:44 crc kubenswrapper[5104]: fi Jan 30 00:11:44 crc kubenswrapper[5104]: network_segmentation_enabled_flag="--enable-network-segmentation" Jan 30 00:11:44 crc kubenswrapper[5104]: fi Jan 30 00:11:44 crc kubenswrapper[5104]: Jan 30 00:11:44 crc kubenswrapper[5104]: route_advertisements_enable_flag= Jan 30 00:11:44 crc kubenswrapper[5104]: if [[ "false" == "true" ]]; then Jan 30 00:11:44 crc kubenswrapper[5104]: route_advertisements_enable_flag="--enable-route-advertisements" Jan 30 00:11:44 crc kubenswrapper[5104]: fi Jan 30 00:11:44 crc kubenswrapper[5104]: Jan 30 00:11:44 crc kubenswrapper[5104]: preconfigured_udn_addresses_enable_flag= Jan 30 00:11:44 crc kubenswrapper[5104]: if [[ "false" == "true" ]]; then Jan 30 00:11:44 crc kubenswrapper[5104]: preconfigured_udn_addresses_enable_flag="--enable-preconfigured-udn-addresses" Jan 30 00:11:44 crc 
kubenswrapper[5104]: fi Jan 30 00:11:44 crc kubenswrapper[5104]: Jan 30 00:11:44 crc kubenswrapper[5104]: # Enable multi-network policy if configured (control-plane always full mode) Jan 30 00:11:44 crc kubenswrapper[5104]: multi_network_policy_enabled_flag= Jan 30 00:11:44 crc kubenswrapper[5104]: if [[ "false" == "true" ]]; then Jan 30 00:11:44 crc kubenswrapper[5104]: multi_network_policy_enabled_flag="--enable-multi-networkpolicy" Jan 30 00:11:44 crc kubenswrapper[5104]: fi Jan 30 00:11:44 crc kubenswrapper[5104]: Jan 30 00:11:44 crc kubenswrapper[5104]: # Enable admin network policy if configured (control-plane always full mode) Jan 30 00:11:44 crc kubenswrapper[5104]: admin_network_policy_enabled_flag= Jan 30 00:11:44 crc kubenswrapper[5104]: if [[ "true" == "true" ]]; then Jan 30 00:11:44 crc kubenswrapper[5104]: admin_network_policy_enabled_flag="--enable-admin-network-policy" Jan 30 00:11:44 crc kubenswrapper[5104]: fi Jan 30 00:11:44 crc kubenswrapper[5104]: Jan 30 00:11:44 crc kubenswrapper[5104]: if [ "shared" == "shared" ]; then Jan 30 00:11:44 crc kubenswrapper[5104]: gateway_mode_flags="--gateway-mode shared" Jan 30 00:11:44 crc kubenswrapper[5104]: elif [ "shared" == "local" ]; then Jan 30 00:11:44 crc kubenswrapper[5104]: gateway_mode_flags="--gateway-mode local" Jan 30 00:11:44 crc kubenswrapper[5104]: else Jan 30 00:11:44 crc kubenswrapper[5104]: echo "Invalid OVN_GATEWAY_MODE: \"shared\". Must be \"local\" or \"shared\"." 
Jan 30 00:11:44 crc kubenswrapper[5104]: exit 1 Jan 30 00:11:44 crc kubenswrapper[5104]: fi Jan 30 00:11:44 crc kubenswrapper[5104]: Jan 30 00:11:44 crc kubenswrapper[5104]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Jan 30 00:11:44 crc kubenswrapper[5104]: exec /usr/bin/ovnkube \ Jan 30 00:11:44 crc kubenswrapper[5104]: --enable-interconnect \ Jan 30 00:11:44 crc kubenswrapper[5104]: --init-cluster-manager "${K8S_NODE}" \ Jan 30 00:11:44 crc kubenswrapper[5104]: --config-file=/run/ovnkube-config/ovnkube.conf \ Jan 30 00:11:44 crc kubenswrapper[5104]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Jan 30 00:11:44 crc kubenswrapper[5104]: --metrics-bind-address "127.0.0.1:29108" \ Jan 30 00:11:44 crc kubenswrapper[5104]: --metrics-enable-pprof \ Jan 30 00:11:44 crc kubenswrapper[5104]: --metrics-enable-config-duration \ Jan 30 00:11:44 crc kubenswrapper[5104]: ${ovn_v4_join_subnet_opt} \ Jan 30 00:11:44 crc kubenswrapper[5104]: ${ovn_v6_join_subnet_opt} \ Jan 30 00:11:44 crc kubenswrapper[5104]: ${ovn_v4_transit_switch_subnet_opt} \ Jan 30 00:11:44 crc kubenswrapper[5104]: ${ovn_v6_transit_switch_subnet_opt} \ Jan 30 00:11:44 crc kubenswrapper[5104]: ${dns_name_resolver_enabled_flag} \ Jan 30 00:11:44 crc kubenswrapper[5104]: ${persistent_ips_enabled_flag} \ Jan 30 00:11:44 crc kubenswrapper[5104]: ${multi_network_enabled_flag} \ Jan 30 00:11:44 crc kubenswrapper[5104]: ${network_segmentation_enabled_flag} \ Jan 30 00:11:44 crc kubenswrapper[5104]: ${gateway_mode_flags} \ Jan 30 00:11:44 crc kubenswrapper[5104]: ${route_advertisements_enable_flag} \ Jan 30 00:11:44 crc kubenswrapper[5104]: ${preconfigured_udn_addresses_enable_flag} \ Jan 30 00:11:44 crc kubenswrapper[5104]: --enable-egress-ip=true \ Jan 30 00:11:44 crc kubenswrapper[5104]: --enable-egress-firewall=true \ Jan 30 00:11:44 crc kubenswrapper[5104]: --enable-egress-qos=true \ Jan 30 00:11:44 crc kubenswrapper[5104]: --enable-egress-service=true \ 
Jan 30 00:11:44 crc kubenswrapper[5104]: --enable-multicast \ Jan 30 00:11:44 crc kubenswrapper[5104]: --enable-multi-external-gateway=true \ Jan 30 00:11:44 crc kubenswrapper[5104]: ${multi_network_policy_enabled_flag} \ Jan 30 00:11:44 crc kubenswrapper[5104]: ${admin_network_policy_enabled_flag} Jan 30 00:11:44 crc kubenswrapper[5104]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t4vhr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
ovnkube-control-plane-57b78d8988-zg4cj_openshift-ovn-kubernetes(925f8c53-ccbf-4f3c-a811-4d64d678e217): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 30 00:11:44 crc kubenswrapper[5104]: > logger="UnhandledError" Jan 30 00:11:44 crc kubenswrapper[5104]: E0130 00:11:44.988005 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-jzfxc" podUID="2f49b5db-a679-4eef-9bf2-8d0275caac12" Jan 30 00:11:44 crc kubenswrapper[5104]: E0130 00:11:44.989167 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-zg4cj" podUID="925f8c53-ccbf-4f3c-a811-4d64d678e217" Jan 30 00:11:44 crc kubenswrapper[5104]: I0130 00:11:44.993815 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0a5f88e-2cb1-4067-82fa-dd04127fe6a6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://446c0914d8c5bcbe4b931fac391de5327afb0740f5a647ff10bfa8ae3718070a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://991a729c5e18b1bfa18b949f180147804f656e534eed823b6cfd848589448a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://991a729c5e18b1bfa18b949f180147804f656e534eed823b6cfd848589448a11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:45 crc kubenswrapper[5104]: I0130 00:11:45.007275 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9mfdf" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc38d06d-c458-429d-8dbf-43aab1cd4e57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-85jg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-85jg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-85jg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-85jg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-85jg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-85jg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"
name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-85jg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9mfdf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:45 crc kubenswrapper[5104]: I0130 00:11:45.015163 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-qpj6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27b37cd2-349b-4e9b-9665-06efa944384c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57wnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qpj6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:45 crc kubenswrapper[5104]: I0130 00:11:45.027421 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eafa3f8d-ea5b-4973-b2fe-537afe846212\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://39527af563278eaf7f4de232e9b050b0a2a37b4f221fbcff6253ffbfc6a6db05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\
\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b0901a202d5e8c1e87d98c3af50e89ff2f04e3048aa45f79db8a23a1020c0178\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6472a07b9e0d1d4d2094e9fe4464e17f6230a2915a19bb59bd54df043380b9f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\"
:0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://986b21b22cd3bdf35c46b74e23ebf17435e4f31f7fc4cb8270e7bef6c7d3aeb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://986b21b22cd3bdf35c46b74e23ebf17435e4f31f7fc4cb8270e7bef6c7d3aeb3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:20Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:45 crc kubenswrapper[5104]: I0130 00:11:45.041461 5104 
status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:45 crc kubenswrapper[5104]: I0130 00:11:45.053168 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:45 crc kubenswrapper[5104]: I0130 00:11:45.061838 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qnqx2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"16c76ea1-575d-492f-b64a-9116b99a5b28\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfj8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qnqx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:45 crc kubenswrapper[5104]: I0130 00:11:45.074310 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:45 crc kubenswrapper[5104]: I0130 00:11:45.074378 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:45 crc kubenswrapper[5104]: I0130 00:11:45.074394 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:45 crc kubenswrapper[5104]: I0130 00:11:45.074413 5104 kubelet_node_status.go:736] "Recording 
event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:45 crc kubenswrapper[5104]: I0130 00:11:45.074430 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:45Z","lastTransitionTime":"2026-01-30T00:11:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:45 crc kubenswrapper[5104]: I0130 00:11:45.084412 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea287dd4-000d-4cad-8964-eea48612652e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://383804c6c2c049cb0469a54bdc63fa42ec853ada3540352b5520d7b25d1da994\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://b8f7b53bbb2fea415aa6f8cab552a634e497844f09ceab42a0dccba0cc0d62fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kube
rnetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bea04eb937eda3bc23c54503bd818434d7a6f7fab1b23383843cc7bf8379462b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://00629bfc42e0323311bb23b075167b46d96260c873bb2179d4b4e10a20c048ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-m
anager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:45 crc kubenswrapper[5104]: I0130 00:11:45.129185 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 30 00:11:45 crc kubenswrapper[5104]: I0130 00:11:45.143282 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:11:45 crc kubenswrapper[5104]: I0130 00:11:45.143372 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:11:45 crc kubenswrapper[5104]: I0130 00:11:45.143413 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:11:45 crc kubenswrapper[5104]: I0130 00:11:45.143449 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:11:45 crc kubenswrapper[5104]: E0130 00:11:45.143503 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 
podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:11:46.143474903 +0000 UTC m=+86.875814122 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:11:45 crc kubenswrapper[5104]: I0130 00:11:45.143548 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:11:45 crc kubenswrapper[5104]: E0130 00:11:45.143570 5104 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 00:11:45 crc kubenswrapper[5104]: E0130 00:11:45.143588 5104 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 00:11:45 crc kubenswrapper[5104]: E0130 00:11:45.143599 5104 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:11:45 crc kubenswrapper[5104]: E0130 00:11:45.143670 5104 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-30 00:11:46.143653288 +0000 UTC m=+86.875992507 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:11:45 crc kubenswrapper[5104]: E0130 00:11:45.143721 5104 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 00:11:45 crc kubenswrapper[5104]: E0130 00:11:45.143761 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-30 00:11:46.143754291 +0000 UTC m=+86.876093510 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 00:11:45 crc kubenswrapper[5104]: E0130 00:11:45.143836 5104 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 00:11:45 crc kubenswrapper[5104]: E0130 00:11:45.143846 5104 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 00:11:45 crc kubenswrapper[5104]: E0130 00:11:45.143875 5104 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:11:45 crc kubenswrapper[5104]: E0130 00:11:45.143901 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-30 00:11:46.143894434 +0000 UTC m=+86.876233653 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:11:45 crc kubenswrapper[5104]: E0130 00:11:45.143938 5104 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 00:11:45 crc kubenswrapper[5104]: E0130 00:11:45.144029 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-30 00:11:46.144022228 +0000 UTC m=+86.876361447 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 00:11:45 crc kubenswrapper[5104]: I0130 00:11:45.159714 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-bk79c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbld6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bk79c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:45 crc kubenswrapper[5104]: I0130 00:11:45.176273 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:45 crc kubenswrapper[5104]: I0130 00:11:45.176332 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:45 crc kubenswrapper[5104]: I0130 00:11:45.176341 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:45 crc kubenswrapper[5104]: I0130 00:11:45.176355 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:45 crc kubenswrapper[5104]: I0130 00:11:45.176364 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:45Z","lastTransitionTime":"2026-01-30T00:11:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:45 crc kubenswrapper[5104]: I0130 00:11:45.218148 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:45 crc kubenswrapper[5104]: I0130 00:11:45.239761 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-zg4cj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"925f8c53-ccbf-4f3c-a811-4d64d678e217\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4vhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4vhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-zg4cj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:45 crc kubenswrapper[5104]: I0130 00:11:45.244462 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8549d8ab-08fd-4d10-b03e-d162d745184a-metrics-certs\") pod \"network-metrics-daemon-gvjb6\" (UID: \"8549d8ab-08fd-4d10-b03e-d162d745184a\") " pod="openshift-multus/network-metrics-daemon-gvjb6" Jan 30 00:11:45 crc kubenswrapper[5104]: E0130 00:11:45.244617 5104 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 00:11:45 crc kubenswrapper[5104]: E0130 00:11:45.244733 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8549d8ab-08fd-4d10-b03e-d162d745184a-metrics-certs podName:8549d8ab-08fd-4d10-b03e-d162d745184a nodeName:}" failed. No retries permitted until 2026-01-30 00:11:46.244710926 +0000 UTC m=+86.977050155 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8549d8ab-08fd-4d10-b03e-d162d745184a-metrics-certs") pod "network-metrics-daemon-gvjb6" (UID: "8549d8ab-08fd-4d10-b03e-d162d745184a") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 00:11:45 crc kubenswrapper[5104]: I0130 00:11:45.278013 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:45 crc kubenswrapper[5104]: I0130 00:11:45.278064 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:45 crc kubenswrapper[5104]: I0130 00:11:45.278076 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:45 crc kubenswrapper[5104]: I0130 00:11:45.278094 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:45 crc kubenswrapper[5104]: I0130 00:11:45.278108 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:45Z","lastTransitionTime":"2026-01-30T00:11:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:45 crc kubenswrapper[5104]: I0130 00:11:45.279631 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eafa3f8d-ea5b-4973-b2fe-537afe846212\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://39527af563278eaf7f4de232e9b050b0a2a37b4f221fbcff6253ffbfc6a6db05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b0901a202d5e8c1e87d98c3af50e89ff2f04e3048aa45f79db8a23a1020c0178\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6472a07b9e0d1d4d2094e9fe4464e17f6230a2915a19bb59bd54df043380b9f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\
\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://986b21b22cd3bdf35c46b74e23ebf17435e4f31f7fc4cb8270e7bef6c7d3aeb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://986b21b22cd3bdf35c46b74e23ebf17435e4f31f7fc4cb8270e7bef6c7d3aeb3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:20Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:45 crc kubenswrapper[5104]: I0130 00:11:45.320122 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:45 crc kubenswrapper[5104]: I0130 00:11:45.361892 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:45 crc kubenswrapper[5104]: I0130 00:11:45.380496 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:45 crc kubenswrapper[5104]: I0130 00:11:45.380536 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:45 crc kubenswrapper[5104]: I0130 00:11:45.380545 5104 kubelet_node_status.go:736] "Recording event message 
for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:45 crc kubenswrapper[5104]: I0130 00:11:45.380559 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:45 crc kubenswrapper[5104]: I0130 00:11:45.380569 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:45Z","lastTransitionTime":"2026-01-30T00:11:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:45 crc kubenswrapper[5104]: I0130 00:11:45.399667 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qnqx2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16c76ea1-575d-492f-b64a-9116b99a5b28\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfj8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qnqx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:45 crc kubenswrapper[5104]: I0130 00:11:45.447761 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea287dd4-000d-4cad-8964-eea48612652e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://383804c6c2c049cb0469a54bdc63fa42ec853ada3540352b5520d7b25d1da994\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name
\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://b8f7b53bbb2fea415aa6f8cab552a634e497844f09ceab42a0dccba0cc0d62fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bea04eb937eda3bc23c54503bd818434d7a6f7fab1b23383843cc7bf8379462b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://00629bfc42e0323311bb23b075167b46d96260c873bb2179d4b4e10a20c048ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\
":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:45 crc kubenswrapper[5104]: I0130 00:11:45.482506 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:45 crc kubenswrapper[5104]: I0130 00:11:45.482588 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:45 crc kubenswrapper[5104]: I0130 00:11:45.482619 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:45 crc kubenswrapper[5104]: I0130 00:11:45.482649 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:45 crc kubenswrapper[5104]: I0130 00:11:45.482674 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:45Z","lastTransitionTime":"2026-01-30T00:11:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:45 crc kubenswrapper[5104]: I0130 00:11:45.485116 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:45 crc kubenswrapper[5104]: I0130 00:11:45.522832 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-bk79c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbld6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bk79c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:45 crc kubenswrapper[5104]: I0130 00:11:45.565097 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:45 crc kubenswrapper[5104]: I0130 00:11:45.585378 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:45 crc kubenswrapper[5104]: I0130 00:11:45.585678 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:45 crc kubenswrapper[5104]: I0130 00:11:45.585819 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 
00:11:45 crc kubenswrapper[5104]: I0130 00:11:45.586056 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:45 crc kubenswrapper[5104]: I0130 00:11:45.586288 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:45Z","lastTransitionTime":"2026-01-30T00:11:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:45 crc kubenswrapper[5104]: I0130 00:11:45.600742 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-zg4cj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"925f8c53-ccbf-4f3c-a811-4d64d678e217\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4vhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4vhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-zg4cj\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:45 crc kubenswrapper[5104]: I0130 00:11:45.649351 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1afc018-4e45-49c3-a326-4068c590483b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://3ea0c352cb0e6754f7a7b428ac74c8d1d59af3fcd309fead8f147b31fc9d84b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:25Z\\\"}}
,\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://ba7755cd1898e33390a59405284ca9bc8ab6567dee2e7c1134c9093d25ae341f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://560668f5a529df74a5be2ea17dcc5c09bd64122a4f78def29e8d38b4f098ec64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6
c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://10172da6a5c353a3c321326f80b9af59fe5c6acdb48f8951f30401fa25fde394\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:27Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\
"containerID\\\":\\\"cri-o://1a8af248a457a824a347b4bacdb934ce6f91151e6814ba046ecfd0b2f9fef1c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c3dfaca6ebdcc7e86e59721d7b1c4e7825a4a23ea5ee58dc5b1445a63994b711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\
":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3dfaca6ebdcc7e86e59721d7b1c4e7825a4a23ea5ee58dc5b1445a63994b711\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://82c9d3ad0af1dbe7691b30eb224da8a661baeac16b755dc1fccf77c90dda404a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82c9d3ad0af1dbe7691b30eb224da8a661baeac16b755dc1fccf77c90dda404a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://dd29f88f850f66d48dcb41d9ff4b6ed03ce53947fcf1d89e94eb89734d32a9af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd29f88f850f66d48dcb41d9ff4b6ed03ce53947fcf1d89e94eb89734d32a9af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:20Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:45 crc kubenswrapper[5104]: I0130 00:11:45.685469 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:45 crc kubenswrapper[5104]: I0130 00:11:45.691748 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:45 crc kubenswrapper[5104]: I0130 00:11:45.691819 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:45 crc kubenswrapper[5104]: I0130 00:11:45.691833 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:45 crc kubenswrapper[5104]: I0130 00:11:45.691879 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:45 crc kubenswrapper[5104]: I0130 00:11:45.691895 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:45Z","lastTransitionTime":"2026-01-30T00:11:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:45 crc kubenswrapper[5104]: I0130 00:11:45.724489 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:45 crc kubenswrapper[5104]: I0130 00:11:45.761660 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gvjb6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8549d8ab-08fd-4d10-b03e-d162d745184a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbm4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbm4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gvjb6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:45 crc kubenswrapper[5104]: I0130 00:11:45.794472 5104 kubelet_node_status.go:736] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:45 crc kubenswrapper[5104]: I0130 00:11:45.794543 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:45 crc kubenswrapper[5104]: I0130 00:11:45.794563 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:45 crc kubenswrapper[5104]: I0130 00:11:45.794607 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:45 crc kubenswrapper[5104]: I0130 00:11:45.794625 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:45Z","lastTransitionTime":"2026-01-30T00:11:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:45 crc kubenswrapper[5104]: I0130 00:11:45.814907 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dr5dp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4dd9b451-9f5e-4822-b340-7557a89a3ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmsn\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dr5dp\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:45 crc kubenswrapper[5104]: I0130 00:11:45.844034 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzfxc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f49b5db-a679-4eef-9bf2-8d0275caac12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tkzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tkzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jzfxc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:45 crc kubenswrapper[5104]: I0130 00:11:45.888598 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-qpj6b" event={"ID":"27b37cd2-349b-4e9b-9665-06efa944384c","Type":"ContainerStarted","Data":"476bb690b8a4cf60e461e4559da177e627020ccb51cb4eb21cd1e4b8788e04d7"} Jan 30 00:11:45 crc kubenswrapper[5104]: I0130 00:11:45.888656 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a53efdae-bb47-4e91-8fd9-aa3ce42e07fe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://cefeb3f03767c76f93f967f91a3a91beb76d605eca9cbc8c1511e20275afe6f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6edbf8d3caa46b1b8204f581c4ee351245b3a0569a7dc860e8eebd05c21de73e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://341f0f24fd96be5b40281bed5ebcb965c115891201881ea7fca2d25b621efcf4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a59bc6c54f
ddbc4eea03ba9234e68465071e0ae39793c65b4dee93d13d3d8fc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59bc6c54fddbc4eea03ba9234e68465071e0ae39793c65b4dee93d13d3d8fc2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T00:11:11Z\\\",\\\"message\\\":\\\"rue\\\\nI0130 00:11:10.550402 1 observer_polling.go:159] Starting file observer\\\\nW0130 00:11:10.562731 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 00:11:10.562933 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0130 00:11:10.564179 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1712869688/tls.crt::/tmp/serving-cert-1712869688/tls.key\\\\\\\"\\\\nI0130 00:11:11.630504 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 00:11:11.635621 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 00:11:11.635658 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 00:11:11.635724 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 00:11:11.635735 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 00:11:11.641959 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 00:11:11.642000 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0130 00:11:11.642008 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints 
registered and discovery information is complete\\\\nW0130 00:11:11.642013 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 00:11:11.642036 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 00:11:11.642046 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 00:11:11.642053 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 00:11:11.642062 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 00:11:11.644944 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T00:11:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://cb67eb59e5fa97f3ac0f355c63297316d06ab76329d05baadeb90ba933d0299b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ae33bfa613b46c1b72972cced863e6209b8e9eafade2bfbe6cd19d2fb5ee3591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae33bfa613b46c1b72972cced863e6209b8e9eafade2bfbe6cd19d2fb5ee3591\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:45 crc kubenswrapper[5104]: I0130 00:11:45.890929 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzfxc" event={"ID":"2f49b5db-a679-4eef-9bf2-8d0275caac12","Type":"ContainerStarted","Data":"a8939d7f9bc27e764589d85b4aa53c225b25a7c39b54040c60830c16633165ba"} Jan 30 00:11:45 crc kubenswrapper[5104]: E0130 00:11:45.894224 5104 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115,Command:[/usr/bin/machine-config-daemon],Args:[start 
--payload-version=4.20.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tkzd7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-jzfxc_openshift-machine-config-operator(2f49b5db-a679-4eef-9bf2-8d0275caac12): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" 
logger="UnhandledError" Jan 30 00:11:45 crc kubenswrapper[5104]: I0130 00:11:45.895773 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dr5dp" event={"ID":"4dd9b451-9f5e-4822-b340-7557a89a3ce0","Type":"ContainerStarted","Data":"5baac66e9d4e1ec5572319df2731c814892b2924f32a077f78cd3f4ac1cc77f7"} Jan 30 00:11:45 crc kubenswrapper[5104]: E0130 00:11:45.898165 5104 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 30 00:11:45 crc kubenswrapper[5104]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/bin/bash -c #!/bin/bash Jan 30 00:11:45 crc kubenswrapper[5104]: set -uo pipefail Jan 30 00:11:45 crc kubenswrapper[5104]: Jan 30 00:11:45 crc kubenswrapper[5104]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Jan 30 00:11:45 crc kubenswrapper[5104]: Jan 30 00:11:45 crc kubenswrapper[5104]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Jan 30 00:11:45 crc kubenswrapper[5104]: HOSTS_FILE="/etc/hosts" Jan 30 00:11:45 crc kubenswrapper[5104]: TEMP_FILE="/tmp/hosts.tmp" Jan 30 00:11:45 crc kubenswrapper[5104]: Jan 30 00:11:45 crc kubenswrapper[5104]: IFS=', ' read -r -a services <<< "${SERVICES}" Jan 30 00:11:45 crc kubenswrapper[5104]: Jan 30 00:11:45 crc kubenswrapper[5104]: # Make a temporary file with the old hosts file's attributes. Jan 30 00:11:45 crc kubenswrapper[5104]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Jan 30 00:11:45 crc kubenswrapper[5104]: echo "Failed to preserve hosts file. Exiting." 
Jan 30 00:11:45 crc kubenswrapper[5104]: exit 1 Jan 30 00:11:45 crc kubenswrapper[5104]: fi Jan 30 00:11:45 crc kubenswrapper[5104]: Jan 30 00:11:45 crc kubenswrapper[5104]: while true; do Jan 30 00:11:45 crc kubenswrapper[5104]: declare -A svc_ips Jan 30 00:11:45 crc kubenswrapper[5104]: for svc in "${services[@]}"; do Jan 30 00:11:45 crc kubenswrapper[5104]: # Fetch service IP from cluster dns if present. We make several tries Jan 30 00:11:45 crc kubenswrapper[5104]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Jan 30 00:11:45 crc kubenswrapper[5104]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Jan 30 00:11:45 crc kubenswrapper[5104]: # support UDP loadbalancers and require reaching DNS through TCP. Jan 30 00:11:45 crc kubenswrapper[5104]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Jan 30 00:11:45 crc kubenswrapper[5104]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Jan 30 00:11:45 crc kubenswrapper[5104]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Jan 30 00:11:45 crc kubenswrapper[5104]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Jan 30 00:11:45 crc kubenswrapper[5104]: for i in ${!cmds[*]} Jan 30 00:11:45 crc kubenswrapper[5104]: do Jan 30 00:11:45 crc kubenswrapper[5104]: ips=($(eval "${cmds[i]}")) Jan 30 00:11:45 crc kubenswrapper[5104]: if [[ "$?" 
-eq 0 && "${#ips[@]}" -ne 0 ]]; then Jan 30 00:11:45 crc kubenswrapper[5104]: svc_ips["${svc}"]="${ips[@]}" Jan 30 00:11:45 crc kubenswrapper[5104]: break Jan 30 00:11:45 crc kubenswrapper[5104]: fi Jan 30 00:11:45 crc kubenswrapper[5104]: done Jan 30 00:11:45 crc kubenswrapper[5104]: done Jan 30 00:11:45 crc kubenswrapper[5104]: Jan 30 00:11:45 crc kubenswrapper[5104]: # Update /etc/hosts only if we get valid service IPs Jan 30 00:11:45 crc kubenswrapper[5104]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Jan 30 00:11:45 crc kubenswrapper[5104]: # Stale entries could exist in /etc/hosts if the service is deleted Jan 30 00:11:45 crc kubenswrapper[5104]: if [[ -n "${svc_ips[*]-}" ]]; then Jan 30 00:11:45 crc kubenswrapper[5104]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Jan 30 00:11:45 crc kubenswrapper[5104]: if ! sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Jan 30 00:11:45 crc kubenswrapper[5104]: # Only continue rebuilding the hosts entries if its original content is preserved Jan 30 00:11:45 crc kubenswrapper[5104]: sleep 60 & wait Jan 30 00:11:45 crc kubenswrapper[5104]: continue Jan 30 00:11:45 crc kubenswrapper[5104]: fi Jan 30 00:11:45 crc kubenswrapper[5104]: Jan 30 00:11:45 crc kubenswrapper[5104]: # Append resolver entries for services Jan 30 00:11:45 crc kubenswrapper[5104]: rc=0 Jan 30 00:11:45 crc kubenswrapper[5104]: for svc in "${!svc_ips[@]}"; do Jan 30 00:11:45 crc kubenswrapper[5104]: for ip in ${svc_ips[${svc}]}; do Jan 30 00:11:45 crc kubenswrapper[5104]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? 
Jan 30 00:11:45 crc kubenswrapper[5104]: done Jan 30 00:11:45 crc kubenswrapper[5104]: done Jan 30 00:11:45 crc kubenswrapper[5104]: if [[ $rc -ne 0 ]]; then Jan 30 00:11:45 crc kubenswrapper[5104]: sleep 60 & wait Jan 30 00:11:45 crc kubenswrapper[5104]: continue Jan 30 00:11:45 crc kubenswrapper[5104]: fi Jan 30 00:11:45 crc kubenswrapper[5104]: Jan 30 00:11:45 crc kubenswrapper[5104]: Jan 30 00:11:45 crc kubenswrapper[5104]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Jan 30 00:11:45 crc kubenswrapper[5104]: # Replace /etc/hosts with our modified version if needed Jan 30 00:11:45 crc kubenswrapper[5104]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Jan 30 00:11:45 crc kubenswrapper[5104]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Jan 30 00:11:45 crc kubenswrapper[5104]: fi Jan 30 00:11:45 crc kubenswrapper[5104]: sleep 60 & wait Jan 30 00:11:45 crc kubenswrapper[5104]: unset svc_ips Jan 30 00:11:45 crc kubenswrapper[5104]: done Jan 30 00:11:45 crc kubenswrapper[5104]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-57wnv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-qpj6b_openshift-dns(27b37cd2-349b-4e9b-9665-06efa944384c): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 30 00:11:45 crc kubenswrapper[5104]: > logger="UnhandledError" Jan 30 00:11:45 crc kubenswrapper[5104]: I0130 00:11:45.898614 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:45 crc kubenswrapper[5104]: I0130 00:11:45.898657 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:45 crc kubenswrapper[5104]: I0130 00:11:45.899038 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:45 crc kubenswrapper[5104]: I0130 00:11:45.899074 5104 kubelet_node_status.go:736] "Recording event 
message for node" node="crc" event="NodeNotReady" Jan 30 00:11:45 crc kubenswrapper[5104]: I0130 00:11:45.899099 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:45Z","lastTransitionTime":"2026-01-30T00:11:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:45 crc kubenswrapper[5104]: E0130 00:11:45.899273 5104 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 30 00:11:45 crc kubenswrapper[5104]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Jan 30 00:11:45 crc kubenswrapper[5104]: apiVersion: v1 Jan 30 00:11:45 crc kubenswrapper[5104]: clusters: Jan 30 00:11:45 crc kubenswrapper[5104]: - cluster: Jan 30 00:11:45 crc kubenswrapper[5104]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Jan 30 00:11:45 crc kubenswrapper[5104]: server: https://api-int.crc.testing:6443 Jan 30 00:11:45 crc kubenswrapper[5104]: name: default-cluster Jan 30 00:11:45 crc kubenswrapper[5104]: contexts: Jan 30 00:11:45 crc kubenswrapper[5104]: - context: Jan 30 00:11:45 crc kubenswrapper[5104]: cluster: default-cluster Jan 30 00:11:45 crc kubenswrapper[5104]: namespace: default Jan 30 00:11:45 crc kubenswrapper[5104]: user: default-auth Jan 30 00:11:45 crc kubenswrapper[5104]: name: default-context Jan 30 00:11:45 crc kubenswrapper[5104]: current-context: default-context Jan 30 00:11:45 crc kubenswrapper[5104]: kind: Config Jan 30 00:11:45 crc kubenswrapper[5104]: preferences: {} Jan 30 00:11:45 crc kubenswrapper[5104]: users: Jan 30 00:11:45 crc kubenswrapper[5104]: - name: 
default-auth Jan 30 00:11:45 crc kubenswrapper[5104]: user: Jan 30 00:11:45 crc kubenswrapper[5104]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Jan 30 00:11:45 crc kubenswrapper[5104]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Jan 30 00:11:45 crc kubenswrapper[5104]: EOF Jan 30 00:11:45 crc kubenswrapper[5104]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qkmsn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-dr5dp_openshift-ovn-kubernetes(4dd9b451-9f5e-4822-b340-7557a89a3ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 30 00:11:45 crc kubenswrapper[5104]: > logger="UnhandledError" Jan 30 00:11:45 crc kubenswrapper[5104]: E0130 00:11:45.899318 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-qpj6b" podUID="27b37cd2-349b-4e9b-9665-06efa944384c" Jan 30 00:11:45 crc kubenswrapper[5104]: E0130 00:11:45.899314 5104 kuberuntime_manager.go:1358] "Unhandled Error" 
err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tkzd7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
machine-config-daemon-jzfxc_openshift-machine-config-operator(2f49b5db-a679-4eef-9bf2-8d0275caac12): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 30 00:11:45 crc kubenswrapper[5104]: I0130 00:11:45.900088 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9mfdf" event={"ID":"fc38d06d-c458-429d-8dbf-43aab1cd4e57","Type":"ContainerStarted","Data":"e8ee2839da3c5d1bead533312a1064154b635caaf9d4c1713e4dca126d25d0b6"} Jan 30 00:11:45 crc kubenswrapper[5104]: E0130 00:11:45.900464 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-dr5dp" podUID="4dd9b451-9f5e-4822-b340-7557a89a3ce0" Jan 30 00:11:45 crc kubenswrapper[5104]: E0130 00:11:45.900465 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-jzfxc" podUID="2f49b5db-a679-4eef-9bf2-8d0275caac12" Jan 30 00:11:45 crc kubenswrapper[5104]: I0130 00:11:45.902569 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-zg4cj" event={"ID":"925f8c53-ccbf-4f3c-a811-4d64d678e217","Type":"ContainerStarted","Data":"6b26fb8323c1fa1f41e9b6f71949dc98a78c2d69b578258e6da79a7d9af02855"} Jan 30 00:11:45 crc kubenswrapper[5104]: E0130 00:11:45.903077 5104 kuberuntime_manager.go:1358] "Unhandled Error" err="init container 
&Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-85jg5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-9mfdf_openshift-multus(fc38d06d-c458-429d-8dbf-43aab1cd4e57): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 30 00:11:45 crc kubenswrapper[5104]: I0130 00:11:45.904086 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-image-registry/node-ca-qnqx2" event={"ID":"16c76ea1-575d-492f-b64a-9116b99a5b28","Type":"ContainerStarted","Data":"e44f5d3944afebfe5ca116269d66c885eea62defc1af18bbb2a1a3b05cb6ba0e"} Jan 30 00:11:45 crc kubenswrapper[5104]: E0130 00:11:45.904498 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-9mfdf" podUID="fc38d06d-c458-429d-8dbf-43aab1cd4e57" Jan 30 00:11:45 crc kubenswrapper[5104]: E0130 00:11:45.904505 5104 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 30 00:11:45 crc kubenswrapper[5104]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[/bin/bash -c #!/bin/bash Jan 30 00:11:45 crc kubenswrapper[5104]: set -euo pipefail Jan 30 00:11:45 crc kubenswrapper[5104]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Jan 30 00:11:45 crc kubenswrapper[5104]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Jan 30 00:11:45 crc kubenswrapper[5104]: # As the secret mount is optional we must wait for the files to be present. Jan 30 00:11:45 crc kubenswrapper[5104]: # The service is created in monitor.yaml and this is created in sdn.yaml. Jan 30 00:11:45 crc kubenswrapper[5104]: TS=$(date +%s) Jan 30 00:11:45 crc kubenswrapper[5104]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Jan 30 00:11:45 crc kubenswrapper[5104]: HAS_LOGGED_INFO=0 Jan 30 00:11:45 crc kubenswrapper[5104]: Jan 30 00:11:45 crc kubenswrapper[5104]: log_missing_certs(){ Jan 30 00:11:45 crc kubenswrapper[5104]: CUR_TS=$(date +%s) Jan 30 00:11:45 crc kubenswrapper[5104]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Jan 30 00:11:45 crc kubenswrapper[5104]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. 
Jan 30 00:11:45 crc kubenswrapper[5104]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Jan 30 00:11:45 crc kubenswrapper[5104]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. Jan 30 00:11:45 crc kubenswrapper[5104]: HAS_LOGGED_INFO=1 Jan 30 00:11:45 crc kubenswrapper[5104]: fi Jan 30 00:11:45 crc kubenswrapper[5104]: } Jan 30 00:11:45 crc kubenswrapper[5104]: while [[ ! -f "${TLS_PK}" || ! -f "${TLS_CERT}" ]] ; do Jan 30 00:11:45 crc kubenswrapper[5104]: log_missing_certs Jan 30 00:11:45 crc kubenswrapper[5104]: sleep 5 Jan 30 00:11:45 crc kubenswrapper[5104]: done Jan 30 00:11:45 crc kubenswrapper[5104]: Jan 30 00:11:45 crc kubenswrapper[5104]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Jan 30 00:11:45 crc kubenswrapper[5104]: exec /usr/bin/kube-rbac-proxy \ Jan 30 00:11:45 crc kubenswrapper[5104]: --logtostderr \ Jan 30 00:11:45 crc kubenswrapper[5104]: --secure-listen-address=:9108 \ Jan 30 00:11:45 crc kubenswrapper[5104]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Jan 30 00:11:45 crc kubenswrapper[5104]: --upstream=http://127.0.0.1:29108/ \ Jan 30 00:11:45 crc kubenswrapper[5104]: --tls-private-key-file=${TLS_PK} \ Jan 30 00:11:45 crc kubenswrapper[5104]: --tls-cert-file=${TLS_CERT} Jan 30 00:11:45 crc kubenswrapper[5104]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t4vhr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-zg4cj_openshift-ovn-kubernetes(925f8c53-ccbf-4f3c-a811-4d64d678e217): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 30 00:11:45 crc kubenswrapper[5104]: > logger="UnhandledError" Jan 30 00:11:45 crc kubenswrapper[5104]: I0130 00:11:45.906012 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bk79c" event={"ID":"3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f","Type":"ContainerStarted","Data":"e721f42bd258597f5a768b6d0b6c2b976cc84f9c48386b954ed3d4c3e34ed05d"} Jan 30 00:11:45 crc kubenswrapper[5104]: E0130 00:11:45.906438 5104 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 30 00:11:45 crc kubenswrapper[5104]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Jan 30 00:11:45 crc kubenswrapper[5104]: while [ true ]; Jan 30 00:11:45 crc kubenswrapper[5104]: do Jan 30 00:11:45 crc kubenswrapper[5104]: for f in $(ls /tmp/serviceca); do Jan 30 00:11:45 crc kubenswrapper[5104]: 
echo $f Jan 30 00:11:45 crc kubenswrapper[5104]: ca_file_path="/tmp/serviceca/${f}" Jan 30 00:11:45 crc kubenswrapper[5104]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Jan 30 00:11:45 crc kubenswrapper[5104]: reg_dir_path="/etc/docker/certs.d/${f}" Jan 30 00:11:45 crc kubenswrapper[5104]: if [ -e "${reg_dir_path}" ]; then Jan 30 00:11:45 crc kubenswrapper[5104]: cp -u $ca_file_path $reg_dir_path/ca.crt Jan 30 00:11:45 crc kubenswrapper[5104]: else Jan 30 00:11:45 crc kubenswrapper[5104]: mkdir $reg_dir_path Jan 30 00:11:45 crc kubenswrapper[5104]: cp $ca_file_path $reg_dir_path/ca.crt Jan 30 00:11:45 crc kubenswrapper[5104]: fi Jan 30 00:11:45 crc kubenswrapper[5104]: done Jan 30 00:11:45 crc kubenswrapper[5104]: for d in $(ls /etc/docker/certs.d); do Jan 30 00:11:45 crc kubenswrapper[5104]: echo $d Jan 30 00:11:45 crc kubenswrapper[5104]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Jan 30 00:11:45 crc kubenswrapper[5104]: reg_conf_path="/tmp/serviceca/${dp}" Jan 30 00:11:45 crc kubenswrapper[5104]: if [ ! 
-e "${reg_conf_path}" ]; then Jan 30 00:11:45 crc kubenswrapper[5104]: rm -rf /etc/docker/certs.d/$d Jan 30 00:11:45 crc kubenswrapper[5104]: fi Jan 30 00:11:45 crc kubenswrapper[5104]: done Jan 30 00:11:45 crc kubenswrapper[5104]: sleep 60 & wait ${!} Jan 30 00:11:45 crc kubenswrapper[5104]: done Jan 30 00:11:45 crc kubenswrapper[5104]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vfj8r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-qnqx2_openshift-image-registry(16c76ea1-575d-492f-b64a-9116b99a5b28): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 30 00:11:45 crc kubenswrapper[5104]: > logger="UnhandledError" Jan 30 00:11:45 crc kubenswrapper[5104]: E0130 00:11:45.907741 5104 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-qnqx2" podUID="16c76ea1-575d-492f-b64a-9116b99a5b28" Jan 30 00:11:45 crc kubenswrapper[5104]: E0130 00:11:45.907801 5104 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 30 00:11:45 crc kubenswrapper[5104]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Jan 30 00:11:45 crc kubenswrapper[5104]: if [[ -f "/env/_master" ]]; then Jan 30 00:11:45 crc kubenswrapper[5104]: set -o allexport Jan 30 00:11:45 crc kubenswrapper[5104]: source "/env/_master" Jan 30 00:11:45 crc kubenswrapper[5104]: set +o allexport Jan 30 00:11:45 crc kubenswrapper[5104]: fi Jan 30 00:11:45 crc kubenswrapper[5104]: Jan 30 00:11:45 crc kubenswrapper[5104]: ovn_v4_join_subnet_opt= Jan 30 00:11:45 crc kubenswrapper[5104]: if [[ "" != "" ]]; then Jan 30 00:11:45 crc kubenswrapper[5104]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Jan 30 00:11:45 crc kubenswrapper[5104]: fi Jan 30 00:11:45 crc kubenswrapper[5104]: ovn_v6_join_subnet_opt= Jan 30 00:11:45 crc kubenswrapper[5104]: if [[ "" != "" ]]; then Jan 30 00:11:45 crc kubenswrapper[5104]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Jan 30 00:11:45 crc kubenswrapper[5104]: fi Jan 30 00:11:45 crc kubenswrapper[5104]: Jan 30 00:11:45 crc kubenswrapper[5104]: ovn_v4_transit_switch_subnet_opt= Jan 30 00:11:45 crc kubenswrapper[5104]: if [[ "" != "" ]]; then Jan 30 00:11:45 crc kubenswrapper[5104]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Jan 30 00:11:45 crc kubenswrapper[5104]: fi Jan 30 00:11:45 crc kubenswrapper[5104]: ovn_v6_transit_switch_subnet_opt= Jan 30 00:11:45 crc kubenswrapper[5104]: 
if [[ "" != "" ]]; then Jan 30 00:11:45 crc kubenswrapper[5104]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Jan 30 00:11:45 crc kubenswrapper[5104]: fi Jan 30 00:11:45 crc kubenswrapper[5104]: Jan 30 00:11:45 crc kubenswrapper[5104]: dns_name_resolver_enabled_flag= Jan 30 00:11:45 crc kubenswrapper[5104]: if [[ "false" == "true" ]]; then Jan 30 00:11:45 crc kubenswrapper[5104]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Jan 30 00:11:45 crc kubenswrapper[5104]: fi Jan 30 00:11:45 crc kubenswrapper[5104]: Jan 30 00:11:45 crc kubenswrapper[5104]: persistent_ips_enabled_flag="--enable-persistent-ips" Jan 30 00:11:45 crc kubenswrapper[5104]: Jan 30 00:11:45 crc kubenswrapper[5104]: # This is needed so that converting clusters from GA to TP Jan 30 00:11:45 crc kubenswrapper[5104]: # will rollout control plane pods as well Jan 30 00:11:45 crc kubenswrapper[5104]: network_segmentation_enabled_flag= Jan 30 00:11:45 crc kubenswrapper[5104]: multi_network_enabled_flag= Jan 30 00:11:45 crc kubenswrapper[5104]: if [[ "true" == "true" ]]; then Jan 30 00:11:45 crc kubenswrapper[5104]: multi_network_enabled_flag="--enable-multi-network" Jan 30 00:11:45 crc kubenswrapper[5104]: fi Jan 30 00:11:45 crc kubenswrapper[5104]: if [[ "true" == "true" ]]; then Jan 30 00:11:45 crc kubenswrapper[5104]: if [[ "true" != "true" ]]; then Jan 30 00:11:45 crc kubenswrapper[5104]: multi_network_enabled_flag="--enable-multi-network" Jan 30 00:11:45 crc kubenswrapper[5104]: fi Jan 30 00:11:45 crc kubenswrapper[5104]: network_segmentation_enabled_flag="--enable-network-segmentation" Jan 30 00:11:45 crc kubenswrapper[5104]: fi Jan 30 00:11:45 crc kubenswrapper[5104]: Jan 30 00:11:45 crc kubenswrapper[5104]: route_advertisements_enable_flag= Jan 30 00:11:45 crc kubenswrapper[5104]: if [[ "false" == "true" ]]; then Jan 30 00:11:45 crc kubenswrapper[5104]: route_advertisements_enable_flag="--enable-route-advertisements" Jan 30 00:11:45 crc 
kubenswrapper[5104]: fi Jan 30 00:11:45 crc kubenswrapper[5104]: Jan 30 00:11:45 crc kubenswrapper[5104]: preconfigured_udn_addresses_enable_flag= Jan 30 00:11:45 crc kubenswrapper[5104]: if [[ "false" == "true" ]]; then Jan 30 00:11:45 crc kubenswrapper[5104]: preconfigured_udn_addresses_enable_flag="--enable-preconfigured-udn-addresses" Jan 30 00:11:45 crc kubenswrapper[5104]: fi Jan 30 00:11:45 crc kubenswrapper[5104]: Jan 30 00:11:45 crc kubenswrapper[5104]: # Enable multi-network policy if configured (control-plane always full mode) Jan 30 00:11:45 crc kubenswrapper[5104]: multi_network_policy_enabled_flag= Jan 30 00:11:45 crc kubenswrapper[5104]: if [[ "false" == "true" ]]; then Jan 30 00:11:45 crc kubenswrapper[5104]: multi_network_policy_enabled_flag="--enable-multi-networkpolicy" Jan 30 00:11:45 crc kubenswrapper[5104]: fi Jan 30 00:11:45 crc kubenswrapper[5104]: Jan 30 00:11:45 crc kubenswrapper[5104]: # Enable admin network policy if configured (control-plane always full mode) Jan 30 00:11:45 crc kubenswrapper[5104]: admin_network_policy_enabled_flag= Jan 30 00:11:45 crc kubenswrapper[5104]: if [[ "true" == "true" ]]; then Jan 30 00:11:45 crc kubenswrapper[5104]: admin_network_policy_enabled_flag="--enable-admin-network-policy" Jan 30 00:11:45 crc kubenswrapper[5104]: fi Jan 30 00:11:45 crc kubenswrapper[5104]: Jan 30 00:11:45 crc kubenswrapper[5104]: if [ "shared" == "shared" ]; then Jan 30 00:11:45 crc kubenswrapper[5104]: gateway_mode_flags="--gateway-mode shared" Jan 30 00:11:45 crc kubenswrapper[5104]: elif [ "shared" == "local" ]; then Jan 30 00:11:45 crc kubenswrapper[5104]: gateway_mode_flags="--gateway-mode local" Jan 30 00:11:45 crc kubenswrapper[5104]: else Jan 30 00:11:45 crc kubenswrapper[5104]: echo "Invalid OVN_GATEWAY_MODE: \"shared\". Must be \"local\" or \"shared\"." 
Jan 30 00:11:45 crc kubenswrapper[5104]: exit 1 Jan 30 00:11:45 crc kubenswrapper[5104]: fi Jan 30 00:11:45 crc kubenswrapper[5104]: Jan 30 00:11:45 crc kubenswrapper[5104]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Jan 30 00:11:45 crc kubenswrapper[5104]: exec /usr/bin/ovnkube \ Jan 30 00:11:45 crc kubenswrapper[5104]: --enable-interconnect \ Jan 30 00:11:45 crc kubenswrapper[5104]: --init-cluster-manager "${K8S_NODE}" \ Jan 30 00:11:45 crc kubenswrapper[5104]: --config-file=/run/ovnkube-config/ovnkube.conf \ Jan 30 00:11:45 crc kubenswrapper[5104]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Jan 30 00:11:45 crc kubenswrapper[5104]: --metrics-bind-address "127.0.0.1:29108" \ Jan 30 00:11:45 crc kubenswrapper[5104]: --metrics-enable-pprof \ Jan 30 00:11:45 crc kubenswrapper[5104]: --metrics-enable-config-duration \ Jan 30 00:11:45 crc kubenswrapper[5104]: ${ovn_v4_join_subnet_opt} \ Jan 30 00:11:45 crc kubenswrapper[5104]: ${ovn_v6_join_subnet_opt} \ Jan 30 00:11:45 crc kubenswrapper[5104]: ${ovn_v4_transit_switch_subnet_opt} \ Jan 30 00:11:45 crc kubenswrapper[5104]: ${ovn_v6_transit_switch_subnet_opt} \ Jan 30 00:11:45 crc kubenswrapper[5104]: ${dns_name_resolver_enabled_flag} \ Jan 30 00:11:45 crc kubenswrapper[5104]: ${persistent_ips_enabled_flag} \ Jan 30 00:11:45 crc kubenswrapper[5104]: ${multi_network_enabled_flag} \ Jan 30 00:11:45 crc kubenswrapper[5104]: ${network_segmentation_enabled_flag} \ Jan 30 00:11:45 crc kubenswrapper[5104]: ${gateway_mode_flags} \ Jan 30 00:11:45 crc kubenswrapper[5104]: ${route_advertisements_enable_flag} \ Jan 30 00:11:45 crc kubenswrapper[5104]: ${preconfigured_udn_addresses_enable_flag} \ Jan 30 00:11:45 crc kubenswrapper[5104]: --enable-egress-ip=true \ Jan 30 00:11:45 crc kubenswrapper[5104]: --enable-egress-firewall=true \ Jan 30 00:11:45 crc kubenswrapper[5104]: --enable-egress-qos=true \ Jan 30 00:11:45 crc kubenswrapper[5104]: --enable-egress-service=true \ 
Jan 30 00:11:45 crc kubenswrapper[5104]: --enable-multicast \ Jan 30 00:11:45 crc kubenswrapper[5104]: --enable-multi-external-gateway=true \ Jan 30 00:11:45 crc kubenswrapper[5104]: ${multi_network_policy_enabled_flag} \ Jan 30 00:11:45 crc kubenswrapper[5104]: ${admin_network_policy_enabled_flag} Jan 30 00:11:45 crc kubenswrapper[5104]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t4vhr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
ovnkube-control-plane-57b78d8988-zg4cj_openshift-ovn-kubernetes(925f8c53-ccbf-4f3c-a811-4d64d678e217): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 30 00:11:45 crc kubenswrapper[5104]: > logger="UnhandledError" Jan 30 00:11:45 crc kubenswrapper[5104]: E0130 00:11:45.907953 5104 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 30 00:11:45 crc kubenswrapper[5104]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Jan 30 00:11:45 crc kubenswrapper[5104]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Jan 30 00:11:45 crc kubenswrapper[5104]: ],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{
Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tbld6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-bk79c_openshift-multus(3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 30 00:11:45 crc kubenswrapper[5104]: > logger="UnhandledError" Jan 30 00:11:45 crc kubenswrapper[5104]: E0130 00:11:45.909227 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-bk79c" podUID="3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f" Jan 30 00:11:45 crc kubenswrapper[5104]: E0130 00:11:45.909316 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for 
\"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-zg4cj" podUID="925f8c53-ccbf-4f3c-a811-4d64d678e217" Jan 30 00:11:45 crc kubenswrapper[5104]: I0130 00:11:45.956214 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0a5f88e-2cb1-4067-82fa-dd04127fe6a6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://446c0914d8c5bcbe4b931fac391de5327afb0740f5a647ff10bfa8ae3718070a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastSta
te\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://991a729c5e18b1bfa18b949f180147804f656e534eed823b6cfd848589448a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://991a729c5e18b1bfa18b949f180147804f656e534eed823b6cfd848589448a11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\
\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:45 crc kubenswrapper[5104]: I0130 00:11:45.976300 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9mfdf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc38d06d-c458-429d-8dbf-43aab1cd4e57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-85jg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-85jg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-85jg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-85jg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-85jg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-85jg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"
name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-85jg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9mfdf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.000439 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-qpj6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27b37cd2-349b-4e9b-9665-06efa944384c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57wnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qpj6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.001250 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.001319 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.001338 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.001368 5104 kubelet_node_status.go:736] "Recording 
event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.001385 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:46Z","lastTransitionTime":"2026-01-30T00:11:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.043302 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eafa3f8d-ea5b-4973-b2fe-537afe846212\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://39527af563278eaf7f4de232e9b050b0a2a37b4f221fbcff6253ffbfc6a6db05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b0901a202d5e8c1e87d98c3af50e89ff2f04e3048aa45f79db8a23a1020c0178\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"cont
ainerID\\\":\\\"cri-o://6472a07b9e0d1d4d2094e9fe4464e17f6230a2915a19bb59bd54df043380b9f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://986b21b22cd3bdf35c46b74e23ebf17435e4f31f7fc4cb8270e7bef6c7d3aeb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://986b21b22cd3bdf35c46b74e23ebf17435e4f31f7fc4cb8270e7bef6c7d3aeb3\\\",\\\"exi
tCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:20Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.086402 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.104143 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.104226 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:46 crc 
kubenswrapper[5104]: I0130 00:11:46.104270 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.104307 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.104330 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:46Z","lastTransitionTime":"2026-01-30T00:11:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.120923 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.157921 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.158028 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod 
\"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.158074 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.158115 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:11:46 crc kubenswrapper[5104]: E0130 00:11:46.158276 5104 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.158304 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:11:46 crc kubenswrapper[5104]: E0130 00:11:46.158400 5104 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 00:11:46 crc 
kubenswrapper[5104]: E0130 00:11:46.158466 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:11:48.15843588 +0000 UTC m=+88.890775139 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:11:46 crc kubenswrapper[5104]: E0130 00:11:46.158591 5104 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 00:11:46 crc kubenswrapper[5104]: E0130 00:11:46.158611 5104 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 00:11:46 crc kubenswrapper[5104]: E0130 00:11:46.158631 5104 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:11:46 crc kubenswrapper[5104]: E0130 00:11:46.158659 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. 
No retries permitted until 2026-01-30 00:11:48.158591394 +0000 UTC m=+88.890930653 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 00:11:46 crc kubenswrapper[5104]: E0130 00:11:46.158705 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-30 00:11:48.158689566 +0000 UTC m=+88.891028825 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:11:46 crc kubenswrapper[5104]: E0130 00:11:46.158764 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-30 00:11:48.158733297 +0000 UTC m=+88.891072566 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 00:11:46 crc kubenswrapper[5104]: E0130 00:11:46.158788 5104 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 00:11:46 crc kubenswrapper[5104]: E0130 00:11:46.158808 5104 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 00:11:46 crc kubenswrapper[5104]: E0130 00:11:46.158823 5104 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:11:46 crc kubenswrapper[5104]: E0130 00:11:46.158911 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-30 00:11:48.158889033 +0000 UTC m=+88.891228332 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.160173 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qnqx2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16c76ea1-575d-492f-b64a-9116b99a5b28\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfj8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qnqx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.203225 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea287dd4-000d-4cad-8964-eea48612652e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://383804c6c2c049cb0469a54bdc63fa42ec853ada3540352b5520d7b25d1da994\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name
\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://b8f7b53bbb2fea415aa6f8cab552a634e497844f09ceab42a0dccba0cc0d62fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bea04eb937eda3bc23c54503bd818434d7a6f7fab1b23383843cc7bf8379462b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://00629bfc42e0323311bb23b075167b46d96260c873bb2179d4b4e10a20c048ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\
":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.207001 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.207063 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.207081 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.207106 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.207126 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:46Z","lastTransitionTime":"2026-01-30T00:11:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.244531 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.259795 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8549d8ab-08fd-4d10-b03e-d162d745184a-metrics-certs\") pod \"network-metrics-daemon-gvjb6\" (UID: \"8549d8ab-08fd-4d10-b03e-d162d745184a\") " pod="openshift-multus/network-metrics-daemon-gvjb6" Jan 30 00:11:46 crc kubenswrapper[5104]: E0130 00:11:46.260036 5104 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 00:11:46 crc kubenswrapper[5104]: E0130 00:11:46.260157 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8549d8ab-08fd-4d10-b03e-d162d745184a-metrics-certs podName:8549d8ab-08fd-4d10-b03e-d162d745184a nodeName:}" 
failed. No retries permitted until 2026-01-30 00:11:48.260127975 +0000 UTC m=+88.992467234 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8549d8ab-08fd-4d10-b03e-d162d745184a-metrics-certs") pod "network-metrics-daemon-gvjb6" (UID: "8549d8ab-08fd-4d10-b03e-d162d745184a") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.280973 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-bk79c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbld6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bk79c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.309789 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.309841 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.309879 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.309899 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.309911 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:46Z","lastTransitionTime":"2026-01-30T00:11:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.320837 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.362483 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-zg4cj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"925f8c53-ccbf-4f3c-a811-4d64d678e217\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4vhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4vhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-zg4cj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.409821 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1afc018-4e45-49c3-a326-4068c590483b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://3ea0c352cb0e6754f7a7b428ac74c8d1d59af3fcd309fead8f147b31fc9d84b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",
\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://ba7755cd1898e33390a59405284ca9bc8ab6567dee2e7c1134c9093d25ae341f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://560668f5a529df74a5be2ea17dcc5c09bd64122a4f78def29e8d38b4f098ec64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\
"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://10172da6a5c353a3c321326f80b9af59fe5c6acdb48f8951f30401fa25fde394\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:27Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://1a8af248a457a824a347b4bacdb934ce6f91151e6814ba046ecfd0b2f9fef1c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a68
2480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c3dfaca6ebdcc7e86e59721d7b1c4e7825a4a23ea5ee58dc5b1445a63994b711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3dfaca6ebdcc7e86e59721d7b1c4e7825a4a23ea5ee58dc5b1445a63994b711\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}
,\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://82c9d3ad0af1dbe7691b30eb224da8a661baeac16b755dc1fccf77c90dda404a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82c9d3ad0af1dbe7691b30eb224da8a661baeac16b755dc1fccf77c90dda404a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://dd29f88f850f66d48dcb41d9ff4b6ed03ce53947fcf1d89e94eb89734d32a9af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-
o://dd29f88f850f66d48dcb41d9ff4b6ed03ce53947fcf1d89e94eb89734d32a9af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:20Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.411919 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.411972 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.411985 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.412004 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.412015 5104 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:46Z","lastTransitionTime":"2026-01-30T00:11:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.443207 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.481497 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.514054 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.514136 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 
00:11:46.514159 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.514184 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.514203 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:46Z","lastTransitionTime":"2026-01-30T00:11:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.521565 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gvjb6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8549d8ab-08fd-4d10-b03e-d162d745184a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbm4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbm4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gvjb6\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.525035 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.525106 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.525065 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.525038 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gvjb6" Jan 30 00:11:46 crc kubenswrapper[5104]: E0130 00:11:46.525354 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 30 00:11:46 crc kubenswrapper[5104]: E0130 00:11:46.525535 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 30 00:11:46 crc kubenswrapper[5104]: E0130 00:11:46.526031 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gvjb6" podUID="8549d8ab-08fd-4d10-b03e-d162d745184a" Jan 30 00:11:46 crc kubenswrapper[5104]: E0130 00:11:46.526114 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.532127 5104 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01080b46-74f1-4191-8755-5152a57b3b25" path="/var/lib/kubelet/pods/01080b46-74f1-4191-8755-5152a57b3b25/volumes" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.533596 5104 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09cfa50b-4138-4585-a53e-64dd3ab73335" path="/var/lib/kubelet/pods/09cfa50b-4138-4585-a53e-64dd3ab73335/volumes" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.537297 5104 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" path="/var/lib/kubelet/pods/0dd0fbac-8c0d-4228-8faa-abbeedabf7db/volumes" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.544843 5104 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0effdbcf-dd7d-404d-9d48-77536d665a5d" 
path="/var/lib/kubelet/pods/0effdbcf-dd7d-404d-9d48-77536d665a5d/volumes" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.564491 5104 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="149b3c48-e17c-4a66-a835-d86dabf6ff13" path="/var/lib/kubelet/pods/149b3c48-e17c-4a66-a835-d86dabf6ff13/volumes" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.566483 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dr5dp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4dd9b451-9f5e-4822-b340-7557a89a3ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmsn\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:44Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dr5dp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.601584 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzfxc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f49b5db-a679-4eef-9bf2-8d0275caac12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tkzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tkzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:44Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-jzfxc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.617416 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.617481 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.617494 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.617514 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.617527 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:46Z","lastTransitionTime":"2026-01-30T00:11:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.618482 5104 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16bdd140-dce1-464c-ab47-dd5798d1d256" path="/var/lib/kubelet/pods/16bdd140-dce1-464c-ab47-dd5798d1d256/volumes" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.621817 5104 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18f80adb-c1c3-49ba-8ee4-932c851d3897" path="/var/lib/kubelet/pods/18f80adb-c1c3-49ba-8ee4-932c851d3897/volumes" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.624521 5104 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" path="/var/lib/kubelet/pods/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e/volumes" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.626662 5104 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2325ffef-9d5b-447f-b00e-3efc429acefe" path="/var/lib/kubelet/pods/2325ffef-9d5b-447f-b00e-3efc429acefe/volumes" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.646993 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a53efdae-bb47-4e91-8fd9-aa3ce42e07fe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://cefeb3f03767c76f93f967f91a3a91beb76d605eca9cbc8c1511e20275afe6f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\
"cri-o://6edbf8d3caa46b1b8204f581c4ee351245b3a0569a7dc860e8eebd05c21de73e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://341f0f24fd96be5b40281bed5ebcb965c115891201881ea7fca2d25b621efcf4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mo
untPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a59bc6c54fddbc4eea03ba9234e68465071e0ae39793c65b4dee93d13d3d8fc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59bc6c54fddbc4eea03ba9234e68465071e0ae39793c65b4dee93d13d3d8fc2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T00:11:11Z\\\",\\\"message\\\":\\\"rue\\\\nI0130 00:11:10.550402 1 observer_polling.go:159] Starting file observer\\\\nW0130 00:11:10.562731 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 00:11:10.562933 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0130 00:11:10.564179 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1712869688/tls.crt::/tmp/serving-cert-1712869688/tls.key\\\\\\\"\\\\nI0130 00:11:11.630504 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 00:11:11.635621 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 00:11:11.635658 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 00:11:11.635724 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 00:11:11.635735 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 00:11:11.641959 
1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 00:11:11.642000 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0130 00:11:11.642008 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0130 00:11:11.642013 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 00:11:11.642036 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 00:11:11.642046 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 00:11:11.642053 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 00:11:11.642062 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 00:11:11.644944 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T00:11:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://cb67eb59e5fa97f3ac0f355c63297316d06ab76329d05baadeb90ba933d0299b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ae33bfa613b46c1b72972cced863e6209b8e9eafade2bfbe6cd19d2fb5ee3591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae33bfa613b46c1b72972cced863e6209b8e9eafade2bfbe6cd19d2fb5ee3591\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.678427 5104 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="301e1965-1754-483d-b6cc-bfae7038bbca" path="/var/lib/kubelet/pods/301e1965-1754-483d-b6cc-bfae7038bbca/volumes" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.680925 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0a5f88e-2cb1-4067-82fa-dd04127fe6a6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://446c0914d8c5bcbe4b931fac391de5327afb0740f5a647ff10bfa8ae3718070a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://991a729c5e18b1bfa18b949f180147804f656e534eed823b6cfd848589448a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://991a729c5e18b1bfa18b949f180147804f656e534eed823b6cfd848589448a11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.682798 5104 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31fa8943-81cc-4750-a0b7-0fa9ab5af883" 
path="/var/lib/kubelet/pods/31fa8943-81cc-4750-a0b7-0fa9ab5af883/volumes" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.686543 5104 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42a11a02-47e1-488f-b270-2679d3298b0e" path="/var/lib/kubelet/pods/42a11a02-47e1-488f-b270-2679d3298b0e/volumes" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.689583 5104 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="567683bd-0efc-4f21-b076-e28559628404" path="/var/lib/kubelet/pods/567683bd-0efc-4f21-b076-e28559628404/volumes" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.695296 5104 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="584e1f4a-8205-47d7-8efb-3afc6017c4c9" path="/var/lib/kubelet/pods/584e1f4a-8205-47d7-8efb-3afc6017c4c9/volumes" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.697838 5104 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="593a3561-7760-45c5-8f91-5aaef7475d0f" path="/var/lib/kubelet/pods/593a3561-7760-45c5-8f91-5aaef7475d0f/volumes" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.699678 5104 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ebfebf6-3ecd-458e-943f-bb25b52e2718" path="/var/lib/kubelet/pods/5ebfebf6-3ecd-458e-943f-bb25b52e2718/volumes" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.702419 5104 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6077b63e-53a2-4f96-9d56-1ce0324e4913" path="/var/lib/kubelet/pods/6077b63e-53a2-4f96-9d56-1ce0324e4913/volumes" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.705007 5104 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" path="/var/lib/kubelet/pods/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca/volumes" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.707497 5104 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6edfcf45-925b-4eff-b940-95b6fc0b85d4" 
path="/var/lib/kubelet/pods/6edfcf45-925b-4eff-b940-95b6fc0b85d4/volumes" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.709816 5104 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ee8fbd3-1f81-4666-96da-5afc70819f1a" path="/var/lib/kubelet/pods/6ee8fbd3-1f81-4666-96da-5afc70819f1a/volumes" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.715019 5104 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" path="/var/lib/kubelet/pods/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a/volumes" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.720531 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9mfdf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc38d06d-c458-429d-8dbf-43aab1cd4e57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-85jg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-85jg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-85jg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-85jg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-85jg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-85jg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-85jg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9mfdf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.721154 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.721321 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.721479 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:46 
crc kubenswrapper[5104]: I0130 00:11:46.721575 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.721652 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:46Z","lastTransitionTime":"2026-01-30T00:11:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.727998 5104 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="736c54fe-349c-4bb9-870a-d1c1d1c03831" path="/var/lib/kubelet/pods/736c54fe-349c-4bb9-870a-d1c1d1c03831/volumes" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.730051 5104 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7599e0b6-bddf-4def-b7f2-0b32206e8651" path="/var/lib/kubelet/pods/7599e0b6-bddf-4def-b7f2-0b32206e8651/volumes" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.743074 5104 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7afa918d-be67-40a6-803c-d3b0ae99d815" path="/var/lib/kubelet/pods/7afa918d-be67-40a6-803c-d3b0ae99d815/volumes" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.744228 5104 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7df94c10-441d-4386-93a6-6730fb7bcde0" path="/var/lib/kubelet/pods/7df94c10-441d-4386-93a6-6730fb7bcde0/volumes" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.746148 5104 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" path="/var/lib/kubelet/pods/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a/volumes" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.748264 5104 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="81e39f7b-62e4-4fc9-992a-6535ce127a02" path="/var/lib/kubelet/pods/81e39f7b-62e4-4fc9-992a-6535ce127a02/volumes" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.750221 5104 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="869851b9-7ffb-4af0-b166-1d8aa40a5f80" path="/var/lib/kubelet/pods/869851b9-7ffb-4af0-b166-1d8aa40a5f80/volumes" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.753547 5104 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" path="/var/lib/kubelet/pods/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff/volumes" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.754497 5104 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92dfbade-90b6-4169-8c07-72cff7f2c82b" path="/var/lib/kubelet/pods/92dfbade-90b6-4169-8c07-72cff7f2c82b/volumes" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.759634 5104 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94a6e063-3d1a-4d44-875d-185291448c31" path="/var/lib/kubelet/pods/94a6e063-3d1a-4d44-875d-185291448c31/volumes" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.759619 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-qpj6b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27b37cd2-349b-4e9b-9665-06efa944384c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57wnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qpj6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.761588 5104 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f71a554-e414-4bc3-96d2-674060397afe" path="/var/lib/kubelet/pods/9f71a554-e414-4bc3-96d2-674060397afe/volumes" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.764725 5104 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a208c9c2-333b-4b4a-be0d-bc32ec38a821" path="/var/lib/kubelet/pods/a208c9c2-333b-4b4a-be0d-bc32ec38a821/volumes" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.767307 5104 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" path="/var/lib/kubelet/pods/a52afe44-fb37-46ed-a1f8-bf39727a3cbe/volumes" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.769010 5104 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a555ff2e-0be6-46d5-897d-863bb92ae2b3" path="/var/lib/kubelet/pods/a555ff2e-0be6-46d5-897d-863bb92ae2b3/volumes" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.770357 5104 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="a7a88189-c967-4640-879e-27665747f20c" path="/var/lib/kubelet/pods/a7a88189-c967-4640-879e-27665747f20c/volumes" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.771562 5104 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.771780 5104 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volumes" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.776700 5104 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af41de71-79cf-4590-bbe9-9e8b848862cb" path="/var/lib/kubelet/pods/af41de71-79cf-4590-bbe9-9e8b848862cb/volumes" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.779106 5104 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" path="/var/lib/kubelet/pods/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a/volumes" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.781127 5104 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4750666-1362-4001-abd0-6f89964cc621" path="/var/lib/kubelet/pods/b4750666-1362-4001-abd0-6f89964cc621/volumes" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.783262 5104 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b605f283-6f2e-42da-a838-54421690f7d0" path="/var/lib/kubelet/pods/b605f283-6f2e-42da-a838-54421690f7d0/volumes" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.784006 5104 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c491984c-7d4b-44aa-8c1e-d7974424fa47" path="/var/lib/kubelet/pods/c491984c-7d4b-44aa-8c1e-d7974424fa47/volumes" Jan 30 00:11:46 crc kubenswrapper[5104]: 
I0130 00:11:46.786077 5104 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5f2bfad-70f6-4185-a3d9-81ce12720767" path="/var/lib/kubelet/pods/c5f2bfad-70f6-4185-a3d9-81ce12720767/volumes" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.787042 5104 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc85e424-18b2-4924-920b-bd291a8c4b01" path="/var/lib/kubelet/pods/cc85e424-18b2-4924-920b-bd291a8c4b01/volumes" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.789234 5104 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce090a97-9ab6-4c40-a719-64ff2acd9778" path="/var/lib/kubelet/pods/ce090a97-9ab6-4c40-a719-64ff2acd9778/volumes" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.790527 5104 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d19cb085-0c5b-4810-b654-ce7923221d90" path="/var/lib/kubelet/pods/d19cb085-0c5b-4810-b654-ce7923221d90/volumes" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.792707 5104 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" path="/var/lib/kubelet/pods/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7/volumes" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.794535 5104 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d565531a-ff86-4608-9d19-767de01ac31b" path="/var/lib/kubelet/pods/d565531a-ff86-4608-9d19-767de01ac31b/volumes" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.796173 5104 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7e8f42f-dc0e-424b-bb56-5ec849834888" path="/var/lib/kubelet/pods/d7e8f42f-dc0e-424b-bb56-5ec849834888/volumes" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.797794 5104 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" path="/var/lib/kubelet/pods/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9/volumes" Jan 30 00:11:46 crc kubenswrapper[5104]: 
I0130 00:11:46.799492 5104 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e093be35-bb62-4843-b2e8-094545761610" path="/var/lib/kubelet/pods/e093be35-bb62-4843-b2e8-094545761610/volumes" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.800687 5104 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" path="/var/lib/kubelet/pods/e1d2a42d-af1d-4054-9618-ab545e0ed8b7/volumes" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.802788 5104 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f559dfa3-3917-43a2-97f6-61ddfda10e93" path="/var/lib/kubelet/pods/f559dfa3-3917-43a2-97f6-61ddfda10e93/volumes" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.805912 5104 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f65c0ac1-8bca-454d-a2e6-e35cb418beac" path="/var/lib/kubelet/pods/f65c0ac1-8bca-454d-a2e6-e35cb418beac/volumes" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.807710 5104 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" path="/var/lib/kubelet/pods/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4/volumes" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.809124 5104 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7e2c886-118e-43bb-bef1-c78134de392b" path="/var/lib/kubelet/pods/f7e2c886-118e-43bb-bef1-c78134de392b/volumes" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.811159 5104 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" path="/var/lib/kubelet/pods/fc8db2c7-859d-47b3-a900-2bd0c0b2973b/volumes" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.824487 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.824571 5104 kubelet_node_status.go:736] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.824591 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.824615 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.824634 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:46Z","lastTransitionTime":"2026-01-30T00:11:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.926782 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.926903 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.926931 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.926959 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:46 crc kubenswrapper[5104]: I0130 00:11:46.926980 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:46Z","lastTransitionTime":"2026-01-30T00:11:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:47 crc kubenswrapper[5104]: I0130 00:11:47.030089 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:47 crc kubenswrapper[5104]: I0130 00:11:47.030162 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:47 crc kubenswrapper[5104]: I0130 00:11:47.030181 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:47 crc kubenswrapper[5104]: I0130 00:11:47.030204 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:47 crc kubenswrapper[5104]: I0130 00:11:47.030222 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:47Z","lastTransitionTime":"2026-01-30T00:11:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:47 crc kubenswrapper[5104]: I0130 00:11:47.133353 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:47 crc kubenswrapper[5104]: I0130 00:11:47.133656 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:47 crc kubenswrapper[5104]: I0130 00:11:47.133781 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:47 crc kubenswrapper[5104]: I0130 00:11:47.133942 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:47 crc kubenswrapper[5104]: I0130 00:11:47.134061 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:47Z","lastTransitionTime":"2026-01-30T00:11:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:47 crc kubenswrapper[5104]: I0130 00:11:47.237170 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:47 crc kubenswrapper[5104]: I0130 00:11:47.237234 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:47 crc kubenswrapper[5104]: I0130 00:11:47.237257 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:47 crc kubenswrapper[5104]: I0130 00:11:47.237281 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:47 crc kubenswrapper[5104]: I0130 00:11:47.237298 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:47Z","lastTransitionTime":"2026-01-30T00:11:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:47 crc kubenswrapper[5104]: I0130 00:11:47.339786 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:47 crc kubenswrapper[5104]: I0130 00:11:47.339844 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:47 crc kubenswrapper[5104]: I0130 00:11:47.339917 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:47 crc kubenswrapper[5104]: I0130 00:11:47.339940 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:47 crc kubenswrapper[5104]: I0130 00:11:47.339955 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:47Z","lastTransitionTime":"2026-01-30T00:11:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:47 crc kubenswrapper[5104]: I0130 00:11:47.442737 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:47 crc kubenswrapper[5104]: I0130 00:11:47.442833 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:47 crc kubenswrapper[5104]: I0130 00:11:47.442897 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:47 crc kubenswrapper[5104]: I0130 00:11:47.442940 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:47 crc kubenswrapper[5104]: I0130 00:11:47.442981 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:47Z","lastTransitionTime":"2026-01-30T00:11:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:47 crc kubenswrapper[5104]: I0130 00:11:47.546107 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:47 crc kubenswrapper[5104]: I0130 00:11:47.546498 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:47 crc kubenswrapper[5104]: I0130 00:11:47.546772 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:47 crc kubenswrapper[5104]: I0130 00:11:47.547098 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:47 crc kubenswrapper[5104]: I0130 00:11:47.547382 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:47Z","lastTransitionTime":"2026-01-30T00:11:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:47 crc kubenswrapper[5104]: I0130 00:11:47.651613 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:47 crc kubenswrapper[5104]: I0130 00:11:47.651688 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:47 crc kubenswrapper[5104]: I0130 00:11:47.651712 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:47 crc kubenswrapper[5104]: I0130 00:11:47.651741 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:47 crc kubenswrapper[5104]: I0130 00:11:47.651766 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:47Z","lastTransitionTime":"2026-01-30T00:11:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:47 crc kubenswrapper[5104]: I0130 00:11:47.754915 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:47 crc kubenswrapper[5104]: I0130 00:11:47.754975 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:47 crc kubenswrapper[5104]: I0130 00:11:47.754989 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:47 crc kubenswrapper[5104]: I0130 00:11:47.755009 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:47 crc kubenswrapper[5104]: I0130 00:11:47.755022 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:47Z","lastTransitionTime":"2026-01-30T00:11:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:47 crc kubenswrapper[5104]: I0130 00:11:47.857022 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:47 crc kubenswrapper[5104]: I0130 00:11:47.857150 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:47 crc kubenswrapper[5104]: I0130 00:11:47.857205 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:47 crc kubenswrapper[5104]: I0130 00:11:47.857237 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:47 crc kubenswrapper[5104]: I0130 00:11:47.857293 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:47Z","lastTransitionTime":"2026-01-30T00:11:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:47 crc kubenswrapper[5104]: I0130 00:11:47.960898 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:47 crc kubenswrapper[5104]: I0130 00:11:47.960964 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:47 crc kubenswrapper[5104]: I0130 00:11:47.960977 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:47 crc kubenswrapper[5104]: I0130 00:11:47.961013 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:47 crc kubenswrapper[5104]: I0130 00:11:47.961033 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:47Z","lastTransitionTime":"2026-01-30T00:11:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:48 crc kubenswrapper[5104]: I0130 00:11:48.063692 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:48 crc kubenswrapper[5104]: I0130 00:11:48.063744 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:48 crc kubenswrapper[5104]: I0130 00:11:48.063758 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:48 crc kubenswrapper[5104]: I0130 00:11:48.063776 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:48 crc kubenswrapper[5104]: I0130 00:11:48.063789 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:48Z","lastTransitionTime":"2026-01-30T00:11:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:48 crc kubenswrapper[5104]: I0130 00:11:48.166023 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:48 crc kubenswrapper[5104]: I0130 00:11:48.166097 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:48 crc kubenswrapper[5104]: I0130 00:11:48.166115 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:48 crc kubenswrapper[5104]: I0130 00:11:48.166137 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:48 crc kubenswrapper[5104]: I0130 00:11:48.166155 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:48Z","lastTransitionTime":"2026-01-30T00:11:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:48 crc kubenswrapper[5104]: I0130 00:11:48.180817 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:11:48 crc kubenswrapper[5104]: E0130 00:11:48.180933 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-01-30 00:11:52.180910562 +0000 UTC m=+92.913249781 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:11:48 crc kubenswrapper[5104]: I0130 00:11:48.180989 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:11:48 crc kubenswrapper[5104]: I0130 00:11:48.181029 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:11:48 crc kubenswrapper[5104]: I0130 00:11:48.181066 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:11:48 crc kubenswrapper[5104]: I0130 00:11:48.181098 5104 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:11:48 crc kubenswrapper[5104]: E0130 00:11:48.181207 5104 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 00:11:48 crc kubenswrapper[5104]: E0130 00:11:48.181222 5104 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 00:11:48 crc kubenswrapper[5104]: E0130 00:11:48.181236 5104 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:11:48 crc kubenswrapper[5104]: E0130 00:11:48.181275 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-30 00:11:52.181265922 +0000 UTC m=+92.913605151 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:11:48 crc kubenswrapper[5104]: E0130 00:11:48.181619 5104 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 00:11:48 crc kubenswrapper[5104]: E0130 00:11:48.181633 5104 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 00:11:48 crc kubenswrapper[5104]: E0130 00:11:48.181641 5104 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:11:48 crc kubenswrapper[5104]: E0130 00:11:48.181672 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-30 00:11:52.181662793 +0000 UTC m=+92.914002012 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:11:48 crc kubenswrapper[5104]: E0130 00:11:48.181707 5104 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 00:11:48 crc kubenswrapper[5104]: E0130 00:11:48.181731 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-30 00:11:52.181723205 +0000 UTC m=+92.914062424 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 00:11:48 crc kubenswrapper[5104]: E0130 00:11:48.181774 5104 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 00:11:48 crc kubenswrapper[5104]: E0130 00:11:48.181799 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-30 00:11:52.181792307 +0000 UTC m=+92.914131526 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 00:11:48 crc kubenswrapper[5104]: I0130 00:11:48.267958 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:48 crc kubenswrapper[5104]: I0130 00:11:48.268004 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:48 crc kubenswrapper[5104]: I0130 00:11:48.268013 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:48 crc kubenswrapper[5104]: I0130 00:11:48.268027 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:48 crc kubenswrapper[5104]: I0130 00:11:48.268040 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:48Z","lastTransitionTime":"2026-01-30T00:11:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:48 crc kubenswrapper[5104]: I0130 00:11:48.282164 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8549d8ab-08fd-4d10-b03e-d162d745184a-metrics-certs\") pod \"network-metrics-daemon-gvjb6\" (UID: \"8549d8ab-08fd-4d10-b03e-d162d745184a\") " pod="openshift-multus/network-metrics-daemon-gvjb6" Jan 30 00:11:48 crc kubenswrapper[5104]: E0130 00:11:48.282313 5104 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 00:11:48 crc kubenswrapper[5104]: E0130 00:11:48.282387 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8549d8ab-08fd-4d10-b03e-d162d745184a-metrics-certs podName:8549d8ab-08fd-4d10-b03e-d162d745184a nodeName:}" failed. No retries permitted until 2026-01-30 00:11:52.282368172 +0000 UTC m=+93.014707391 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8549d8ab-08fd-4d10-b03e-d162d745184a-metrics-certs") pod "network-metrics-daemon-gvjb6" (UID: "8549d8ab-08fd-4d10-b03e-d162d745184a") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 00:11:48 crc kubenswrapper[5104]: I0130 00:11:48.371146 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:48 crc kubenswrapper[5104]: I0130 00:11:48.371248 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:48 crc kubenswrapper[5104]: I0130 00:11:48.371276 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:48 crc kubenswrapper[5104]: I0130 00:11:48.371304 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:48 crc kubenswrapper[5104]: I0130 00:11:48.371323 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:48Z","lastTransitionTime":"2026-01-30T00:11:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:48 crc kubenswrapper[5104]: I0130 00:11:48.474286 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:48 crc kubenswrapper[5104]: I0130 00:11:48.474353 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:48 crc kubenswrapper[5104]: I0130 00:11:48.474374 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:48 crc kubenswrapper[5104]: I0130 00:11:48.474402 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:48 crc kubenswrapper[5104]: I0130 00:11:48.474422 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:48Z","lastTransitionTime":"2026-01-30T00:11:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:48 crc kubenswrapper[5104]: I0130 00:11:48.525548 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:11:48 crc kubenswrapper[5104]: I0130 00:11:48.525592 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:11:48 crc kubenswrapper[5104]: I0130 00:11:48.525597 5104 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:11:48 crc kubenswrapper[5104]: E0130 00:11:48.525768 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 30 00:11:48 crc kubenswrapper[5104]: E0130 00:11:48.525943 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 30 00:11:48 crc kubenswrapper[5104]: I0130 00:11:48.526251 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gvjb6" Jan 30 00:11:48 crc kubenswrapper[5104]: E0130 00:11:48.526425 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gvjb6" podUID="8549d8ab-08fd-4d10-b03e-d162d745184a" Jan 30 00:11:48 crc kubenswrapper[5104]: E0130 00:11:48.526167 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 30 00:11:48 crc kubenswrapper[5104]: I0130 00:11:48.576450 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:48 crc kubenswrapper[5104]: I0130 00:11:48.576744 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:48 crc kubenswrapper[5104]: I0130 00:11:48.576920 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:48 crc kubenswrapper[5104]: I0130 00:11:48.577070 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:48 crc kubenswrapper[5104]: I0130 00:11:48.577181 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:48Z","lastTransitionTime":"2026-01-30T00:11:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:48 crc kubenswrapper[5104]: I0130 00:11:48.679146 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:48 crc kubenswrapper[5104]: I0130 00:11:48.679201 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:48 crc kubenswrapper[5104]: I0130 00:11:48.679213 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:48 crc kubenswrapper[5104]: I0130 00:11:48.679229 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:48 crc kubenswrapper[5104]: I0130 00:11:48.679239 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:48Z","lastTransitionTime":"2026-01-30T00:11:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:48 crc kubenswrapper[5104]: I0130 00:11:48.781922 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:48 crc kubenswrapper[5104]: I0130 00:11:48.781970 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:48 crc kubenswrapper[5104]: I0130 00:11:48.781981 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:48 crc kubenswrapper[5104]: I0130 00:11:48.781997 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:48 crc kubenswrapper[5104]: I0130 00:11:48.782008 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:48Z","lastTransitionTime":"2026-01-30T00:11:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:48 crc kubenswrapper[5104]: I0130 00:11:48.884142 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:48 crc kubenswrapper[5104]: I0130 00:11:48.884189 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:48 crc kubenswrapper[5104]: I0130 00:11:48.884202 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:48 crc kubenswrapper[5104]: I0130 00:11:48.884217 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:48 crc kubenswrapper[5104]: I0130 00:11:48.884226 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:48Z","lastTransitionTime":"2026-01-30T00:11:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:48 crc kubenswrapper[5104]: I0130 00:11:48.986804 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:48 crc kubenswrapper[5104]: I0130 00:11:48.986843 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:48 crc kubenswrapper[5104]: I0130 00:11:48.986885 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:48 crc kubenswrapper[5104]: I0130 00:11:48.986907 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:48 crc kubenswrapper[5104]: I0130 00:11:48.986918 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:48Z","lastTransitionTime":"2026-01-30T00:11:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:49 crc kubenswrapper[5104]: I0130 00:11:49.089734 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:49 crc kubenswrapper[5104]: I0130 00:11:49.089805 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:49 crc kubenswrapper[5104]: I0130 00:11:49.089825 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:49 crc kubenswrapper[5104]: I0130 00:11:49.089894 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:49 crc kubenswrapper[5104]: I0130 00:11:49.089921 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:49Z","lastTransitionTime":"2026-01-30T00:11:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:49 crc kubenswrapper[5104]: I0130 00:11:49.192139 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:49 crc kubenswrapper[5104]: I0130 00:11:49.192186 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:49 crc kubenswrapper[5104]: I0130 00:11:49.192200 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:49 crc kubenswrapper[5104]: I0130 00:11:49.192217 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:49 crc kubenswrapper[5104]: I0130 00:11:49.192229 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:49Z","lastTransitionTime":"2026-01-30T00:11:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:49 crc kubenswrapper[5104]: I0130 00:11:49.293880 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:49 crc kubenswrapper[5104]: I0130 00:11:49.293938 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:49 crc kubenswrapper[5104]: I0130 00:11:49.293952 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:49 crc kubenswrapper[5104]: I0130 00:11:49.293971 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:49 crc kubenswrapper[5104]: I0130 00:11:49.293987 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:49Z","lastTransitionTime":"2026-01-30T00:11:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:49 crc kubenswrapper[5104]: I0130 00:11:49.397013 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:49 crc kubenswrapper[5104]: I0130 00:11:49.397102 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:49 crc kubenswrapper[5104]: I0130 00:11:49.397127 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:49 crc kubenswrapper[5104]: I0130 00:11:49.397159 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:49 crc kubenswrapper[5104]: I0130 00:11:49.397186 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:49Z","lastTransitionTime":"2026-01-30T00:11:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:49 crc kubenswrapper[5104]: I0130 00:11:49.500236 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:49 crc kubenswrapper[5104]: I0130 00:11:49.500283 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:49 crc kubenswrapper[5104]: I0130 00:11:49.500294 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:49 crc kubenswrapper[5104]: I0130 00:11:49.500309 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:49 crc kubenswrapper[5104]: I0130 00:11:49.500321 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:49Z","lastTransitionTime":"2026-01-30T00:11:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:49 crc kubenswrapper[5104]: I0130 00:11:49.602382 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:49 crc kubenswrapper[5104]: I0130 00:11:49.602425 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:49 crc kubenswrapper[5104]: I0130 00:11:49.602434 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:49 crc kubenswrapper[5104]: I0130 00:11:49.602452 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:49 crc kubenswrapper[5104]: I0130 00:11:49.602462 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:49Z","lastTransitionTime":"2026-01-30T00:11:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:49 crc kubenswrapper[5104]: I0130 00:11:49.705259 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:49 crc kubenswrapper[5104]: I0130 00:11:49.705311 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:49 crc kubenswrapper[5104]: I0130 00:11:49.705372 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:49 crc kubenswrapper[5104]: I0130 00:11:49.705405 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:49 crc kubenswrapper[5104]: I0130 00:11:49.705421 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:49Z","lastTransitionTime":"2026-01-30T00:11:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:49 crc kubenswrapper[5104]: I0130 00:11:49.807788 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:49 crc kubenswrapper[5104]: I0130 00:11:49.807908 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:49 crc kubenswrapper[5104]: I0130 00:11:49.807967 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:49 crc kubenswrapper[5104]: I0130 00:11:49.807993 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:49 crc kubenswrapper[5104]: I0130 00:11:49.808058 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:49Z","lastTransitionTime":"2026-01-30T00:11:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:49 crc kubenswrapper[5104]: I0130 00:11:49.909882 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:49 crc kubenswrapper[5104]: I0130 00:11:49.909950 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:49 crc kubenswrapper[5104]: I0130 00:11:49.909968 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:49 crc kubenswrapper[5104]: I0130 00:11:49.909993 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:49 crc kubenswrapper[5104]: I0130 00:11:49.910011 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:49Z","lastTransitionTime":"2026-01-30T00:11:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:50 crc kubenswrapper[5104]: I0130 00:11:50.012461 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:50 crc kubenswrapper[5104]: I0130 00:11:50.012533 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:50 crc kubenswrapper[5104]: I0130 00:11:50.012548 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:50 crc kubenswrapper[5104]: I0130 00:11:50.012565 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:50 crc kubenswrapper[5104]: I0130 00:11:50.012577 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:50Z","lastTransitionTime":"2026-01-30T00:11:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:50 crc kubenswrapper[5104]: I0130 00:11:50.115080 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:50 crc kubenswrapper[5104]: I0130 00:11:50.115174 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:50 crc kubenswrapper[5104]: I0130 00:11:50.115185 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:50 crc kubenswrapper[5104]: I0130 00:11:50.115203 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:50 crc kubenswrapper[5104]: I0130 00:11:50.115215 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:50Z","lastTransitionTime":"2026-01-30T00:11:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:50 crc kubenswrapper[5104]: I0130 00:11:50.217660 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:50 crc kubenswrapper[5104]: I0130 00:11:50.217734 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:50 crc kubenswrapper[5104]: I0130 00:11:50.217760 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:50 crc kubenswrapper[5104]: I0130 00:11:50.217897 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:50 crc kubenswrapper[5104]: I0130 00:11:50.217930 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:50Z","lastTransitionTime":"2026-01-30T00:11:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:50 crc kubenswrapper[5104]: I0130 00:11:50.320498 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:50 crc kubenswrapper[5104]: I0130 00:11:50.320566 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:50 crc kubenswrapper[5104]: I0130 00:11:50.320593 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:50 crc kubenswrapper[5104]: I0130 00:11:50.320617 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:50 crc kubenswrapper[5104]: I0130 00:11:50.320637 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:50Z","lastTransitionTime":"2026-01-30T00:11:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:50 crc kubenswrapper[5104]: I0130 00:11:50.422512 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:50 crc kubenswrapper[5104]: I0130 00:11:50.422556 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:50 crc kubenswrapper[5104]: I0130 00:11:50.422565 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:50 crc kubenswrapper[5104]: I0130 00:11:50.422578 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:50 crc kubenswrapper[5104]: I0130 00:11:50.422588 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:50Z","lastTransitionTime":"2026-01-30T00:11:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:50 crc kubenswrapper[5104]: I0130 00:11:50.524905 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:11:50 crc kubenswrapper[5104]: I0130 00:11:50.525025 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gvjb6" Jan 30 00:11:50 crc kubenswrapper[5104]: E0130 00:11:50.525159 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 30 00:11:50 crc kubenswrapper[5104]: I0130 00:11:50.525658 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:11:50 crc kubenswrapper[5104]: E0130 00:11:50.526110 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gvjb6" podUID="8549d8ab-08fd-4d10-b03e-d162d745184a" Jan 30 00:11:50 crc kubenswrapper[5104]: I0130 00:11:50.526298 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:11:50 crc kubenswrapper[5104]: E0130 00:11:50.526508 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 30 00:11:50 crc kubenswrapper[5104]: E0130 00:11:50.526615 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 30 00:11:50 crc kubenswrapper[5104]: I0130 00:11:50.527292 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:50 crc kubenswrapper[5104]: I0130 00:11:50.527429 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:50 crc kubenswrapper[5104]: I0130 00:11:50.527522 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:50 crc kubenswrapper[5104]: I0130 00:11:50.527820 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:50 crc kubenswrapper[5104]: I0130 00:11:50.528002 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:50Z","lastTransitionTime":"2026-01-30T00:11:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:50 crc kubenswrapper[5104]: I0130 00:11:50.549677 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1afc018-4e45-49c3-a326-4068c590483b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://3ea0c352cb0e6754f7a7b428ac74c8d1d59af3fcd309fead8f147b31fc9d84b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/k
ubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://ba7755cd1898e33390a59405284ca9bc8ab6567dee2e7c1134c9093d25ae341f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://560668f5a529df74a5be2ea17dcc5c09bd64122a4f78def29e8d38b4f098ec64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a
6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://10172da6a5c353a3c321326f80b9af59fe5c6acdb48f8951f30401fa25fde394\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:27Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://1a8af248a457a824a347b4bacdb934ce6f91151e6814ba046ecfd0b2f9fef1c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c3dfaca6ebdcc7e86e59721d7b1c4e7825a4a23ea5ee58dc5b1445a63994b711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3dfaca6ebdcc7e86e59721d7b1c4e7825a4a23ea5ee58dc5b1445a63994b711\\\",\\\"exitCode\\\":0
,\\\"finishedAt\\\":\\\"2026-01-30T00:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://82c9d3ad0af1dbe7691b30eb224da8a661baeac16b755dc1fccf77c90dda404a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82c9d3ad0af1dbe7691b30eb224da8a661baeac16b755dc1fccf77c90dda404a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://dd29f88f850f66d48dcb41d9ff4b6ed03ce53947fcf1d89e94eb89734d32a9af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\
",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd29f88f850f66d48dcb41d9ff4b6ed03ce53947fcf1d89e94eb89734d32a9af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:20Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:50 crc kubenswrapper[5104]: I0130 00:11:50.567112 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:50 crc kubenswrapper[5104]: I0130 00:11:50.581594 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:50 crc kubenswrapper[5104]: I0130 00:11:50.591001 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gvjb6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8549d8ab-08fd-4d10-b03e-d162d745184a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbm4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbm4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gvjb6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:50 crc kubenswrapper[5104]: I0130 00:11:50.608364 5104 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-dr5dp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4dd9b451-9f5e-4822-b340-7557a89a3ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmsn\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:44Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dr5dp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:50 crc kubenswrapper[5104]: I0130 00:11:50.617438 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzfxc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f49b5db-a679-4eef-9bf2-8d0275caac12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tkzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tkzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:44Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-jzfxc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:50 crc kubenswrapper[5104]: I0130 00:11:50.631976 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a53efdae-bb47-4e91-8fd9-aa3ce42e07fe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://cefeb3f03767c76f93f967f91a3a91beb76d605eca9cbc8c1511e20275afe6f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6edbf8d3caa46b1b8204f581c4ee351245b3a0569a7dc860e8eebd05c21de73e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://341f0f24fd96be5b40281bed5ebcb965c115891201881ea7fca2d25b621efcf4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a59bc6c54f
ddbc4eea03ba9234e68465071e0ae39793c65b4dee93d13d3d8fc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59bc6c54fddbc4eea03ba9234e68465071e0ae39793c65b4dee93d13d3d8fc2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T00:11:11Z\\\",\\\"message\\\":\\\"rue\\\\nI0130 00:11:10.550402 1 observer_polling.go:159] Starting file observer\\\\nW0130 00:11:10.562731 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 00:11:10.562933 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0130 00:11:10.564179 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1712869688/tls.crt::/tmp/serving-cert-1712869688/tls.key\\\\\\\"\\\\nI0130 00:11:11.630504 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 00:11:11.635621 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 00:11:11.635658 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 00:11:11.635724 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 00:11:11.635735 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 00:11:11.641959 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 00:11:11.642000 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0130 00:11:11.642008 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints 
registered and discovery information is complete\\\\nW0130 00:11:11.642013 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 00:11:11.642036 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 00:11:11.642046 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 00:11:11.642053 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 00:11:11.642062 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 00:11:11.644944 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T00:11:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://cb67eb59e5fa97f3ac0f355c63297316d06ab76329d05baadeb90ba933d0299b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ae33bfa613b46c1b72972cced863e6209b8e9eafade2bfbe6cd19d2fb5ee3591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae33bfa613b46c1b72972cced863e6209b8e9eafade2bfbe6cd19d2fb5ee3591\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:50 crc kubenswrapper[5104]: I0130 00:11:50.632913 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:50 crc kubenswrapper[5104]: I0130 00:11:50.632973 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:50 crc kubenswrapper[5104]: I0130 00:11:50.632992 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:50 crc kubenswrapper[5104]: I0130 00:11:50.633014 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:50 crc kubenswrapper[5104]: I0130 00:11:50.633030 5104 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:50Z","lastTransitionTime":"2026-01-30T00:11:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:50 crc kubenswrapper[5104]: I0130 00:11:50.642411 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0a5f88e-2cb1-4067-82fa-dd04127fe6a6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://446c0914d8c5bcbe4b931fac391de5327afb0740f5a647ff10bfa8ae3718070a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-r
bac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://991a729c5e18b1bfa18b949f180147804f656e534eed823b6cfd848589448a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://991a729c5e18b1bfa18b949f180147804f656e534eed823b6cfd848589448a11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.
11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:50 crc kubenswrapper[5104]: I0130 00:11:50.659517 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9mfdf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc38d06d-c458-429d-8dbf-43aab1cd4e57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-85jg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-85jg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-85jg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-85jg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-85jg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-85jg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-85jg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9mfdf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:50 crc kubenswrapper[5104]: I0130 00:11:50.670709 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-qpj6b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27b37cd2-349b-4e9b-9665-06efa944384c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57wnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qpj6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:50 crc kubenswrapper[5104]: I0130 00:11:50.684388 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eafa3f8d-ea5b-4973-b2fe-537afe846212\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://39527af563278eaf7f4de232e9b050b0a2a37b4f221fbcff6253ffbfc6a6db05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\
\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b0901a202d5e8c1e87d98c3af50e89ff2f04e3048aa45f79db8a23a1020c0178\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6472a07b9e0d1d4d2094e9fe4464e17f6230a2915a19bb59bd54df043380b9f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\"
:0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://986b21b22cd3bdf35c46b74e23ebf17435e4f31f7fc4cb8270e7bef6c7d3aeb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://986b21b22cd3bdf35c46b74e23ebf17435e4f31f7fc4cb8270e7bef6c7d3aeb3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:20Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:50 crc kubenswrapper[5104]: I0130 00:11:50.699183 5104 
status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:50 crc kubenswrapper[5104]: I0130 00:11:50.708797 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:50 crc kubenswrapper[5104]: I0130 00:11:50.717491 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qnqx2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"16c76ea1-575d-492f-b64a-9116b99a5b28\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfj8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qnqx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:50 crc kubenswrapper[5104]: I0130 00:11:50.735056 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea287dd4-000d-4cad-8964-eea48612652e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://383804c6c2c049cb0469a54bdc63fa42ec853ada3540352b5520d7b25d1da994\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name
\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://b8f7b53bbb2fea415aa6f8cab552a634e497844f09ceab42a0dccba0cc0d62fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bea04eb937eda3bc23c54503bd818434d7a6f7fab1b23383843cc7bf8379462b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://00629bfc42e0323311bb23b075167b46d96260c873bb2179d4b4e10a20c048ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\
":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:50 crc kubenswrapper[5104]: I0130 00:11:50.736628 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:50 crc kubenswrapper[5104]: I0130 00:11:50.736662 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:50 crc kubenswrapper[5104]: I0130 00:11:50.736712 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:50 crc kubenswrapper[5104]: I0130 00:11:50.736730 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:50 crc kubenswrapper[5104]: I0130 00:11:50.736743 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:50Z","lastTransitionTime":"2026-01-30T00:11:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:50 crc kubenswrapper[5104]: I0130 00:11:50.745797 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:50 crc kubenswrapper[5104]: I0130 00:11:50.758613 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-bk79c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbld6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bk79c\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:50 crc kubenswrapper[5104]: I0130 00:11:50.769131 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:50 crc kubenswrapper[5104]: I0130 00:11:50.780624 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-zg4cj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"925f8c53-ccbf-4f3c-a811-4d64d678e217\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4vhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4vhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-zg4cj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:50 crc kubenswrapper[5104]: I0130 00:11:50.839529 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:50 crc kubenswrapper[5104]: I0130 00:11:50.839595 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:50 crc kubenswrapper[5104]: I0130 00:11:50.839613 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:50 crc kubenswrapper[5104]: I0130 00:11:50.839638 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:50 crc kubenswrapper[5104]: I0130 00:11:50.839655 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:50Z","lastTransitionTime":"2026-01-30T00:11:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:50 crc kubenswrapper[5104]: I0130 00:11:50.941662 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:50 crc kubenswrapper[5104]: I0130 00:11:50.941719 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:50 crc kubenswrapper[5104]: I0130 00:11:50.941737 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:50 crc kubenswrapper[5104]: I0130 00:11:50.941761 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:50 crc kubenswrapper[5104]: I0130 00:11:50.941779 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:50Z","lastTransitionTime":"2026-01-30T00:11:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:51 crc kubenswrapper[5104]: I0130 00:11:51.044642 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:51 crc kubenswrapper[5104]: I0130 00:11:51.044706 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:51 crc kubenswrapper[5104]: I0130 00:11:51.044725 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:51 crc kubenswrapper[5104]: I0130 00:11:51.044750 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:51 crc kubenswrapper[5104]: I0130 00:11:51.044768 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:51Z","lastTransitionTime":"2026-01-30T00:11:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:51 crc kubenswrapper[5104]: I0130 00:11:51.148424 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:51 crc kubenswrapper[5104]: I0130 00:11:51.148654 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:51 crc kubenswrapper[5104]: I0130 00:11:51.148745 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:51 crc kubenswrapper[5104]: I0130 00:11:51.148778 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:51 crc kubenswrapper[5104]: I0130 00:11:51.148804 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:51Z","lastTransitionTime":"2026-01-30T00:11:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:51 crc kubenswrapper[5104]: I0130 00:11:51.250955 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:51 crc kubenswrapper[5104]: I0130 00:11:51.251012 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:51 crc kubenswrapper[5104]: I0130 00:11:51.251028 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:51 crc kubenswrapper[5104]: I0130 00:11:51.251047 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:51 crc kubenswrapper[5104]: I0130 00:11:51.251060 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:51Z","lastTransitionTime":"2026-01-30T00:11:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:51 crc kubenswrapper[5104]: I0130 00:11:51.353187 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:51 crc kubenswrapper[5104]: I0130 00:11:51.353254 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:51 crc kubenswrapper[5104]: I0130 00:11:51.353272 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:51 crc kubenswrapper[5104]: I0130 00:11:51.353298 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:51 crc kubenswrapper[5104]: I0130 00:11:51.353317 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:51Z","lastTransitionTime":"2026-01-30T00:11:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:51 crc kubenswrapper[5104]: I0130 00:11:51.455687 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:51 crc kubenswrapper[5104]: I0130 00:11:51.455725 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:51 crc kubenswrapper[5104]: I0130 00:11:51.455733 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:51 crc kubenswrapper[5104]: I0130 00:11:51.455745 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:51 crc kubenswrapper[5104]: I0130 00:11:51.455753 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:51Z","lastTransitionTime":"2026-01-30T00:11:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:51 crc kubenswrapper[5104]: I0130 00:11:51.558148 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:51 crc kubenswrapper[5104]: I0130 00:11:51.558223 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:51 crc kubenswrapper[5104]: I0130 00:11:51.558252 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:51 crc kubenswrapper[5104]: I0130 00:11:51.558276 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:51 crc kubenswrapper[5104]: I0130 00:11:51.558290 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:51Z","lastTransitionTime":"2026-01-30T00:11:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:51 crc kubenswrapper[5104]: I0130 00:11:51.660762 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:51 crc kubenswrapper[5104]: I0130 00:11:51.660810 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:51 crc kubenswrapper[5104]: I0130 00:11:51.660822 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:51 crc kubenswrapper[5104]: I0130 00:11:51.660842 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:51 crc kubenswrapper[5104]: I0130 00:11:51.660875 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:51Z","lastTransitionTime":"2026-01-30T00:11:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:51 crc kubenswrapper[5104]: I0130 00:11:51.763887 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:51 crc kubenswrapper[5104]: I0130 00:11:51.763947 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:51 crc kubenswrapper[5104]: I0130 00:11:51.763959 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:51 crc kubenswrapper[5104]: I0130 00:11:51.763976 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:51 crc kubenswrapper[5104]: I0130 00:11:51.763989 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:51Z","lastTransitionTime":"2026-01-30T00:11:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:51 crc kubenswrapper[5104]: I0130 00:11:51.866698 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:51 crc kubenswrapper[5104]: I0130 00:11:51.866753 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:51 crc kubenswrapper[5104]: I0130 00:11:51.866762 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:51 crc kubenswrapper[5104]: I0130 00:11:51.866777 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:51 crc kubenswrapper[5104]: I0130 00:11:51.866790 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:51Z","lastTransitionTime":"2026-01-30T00:11:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:51 crc kubenswrapper[5104]: I0130 00:11:51.968515 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:51 crc kubenswrapper[5104]: I0130 00:11:51.968558 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:51 crc kubenswrapper[5104]: I0130 00:11:51.968569 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:51 crc kubenswrapper[5104]: I0130 00:11:51.968582 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:51 crc kubenswrapper[5104]: I0130 00:11:51.968593 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:51Z","lastTransitionTime":"2026-01-30T00:11:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:52 crc kubenswrapper[5104]: I0130 00:11:52.071059 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:52 crc kubenswrapper[5104]: I0130 00:11:52.071130 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:52 crc kubenswrapper[5104]: I0130 00:11:52.071149 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:52 crc kubenswrapper[5104]: I0130 00:11:52.071176 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:52 crc kubenswrapper[5104]: I0130 00:11:52.071195 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:52Z","lastTransitionTime":"2026-01-30T00:11:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:52 crc kubenswrapper[5104]: I0130 00:11:52.174274 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:52 crc kubenswrapper[5104]: I0130 00:11:52.174371 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:52 crc kubenswrapper[5104]: I0130 00:11:52.174399 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:52 crc kubenswrapper[5104]: I0130 00:11:52.174430 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:52 crc kubenswrapper[5104]: I0130 00:11:52.174458 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:52Z","lastTransitionTime":"2026-01-30T00:11:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:52 crc kubenswrapper[5104]: I0130 00:11:52.225076 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:11:52 crc kubenswrapper[5104]: I0130 00:11:52.225366 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:11:52 crc kubenswrapper[5104]: E0130 00:11:52.225489 5104 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 00:11:52 crc kubenswrapper[5104]: E0130 00:11:52.225501 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:00.225416786 +0000 UTC m=+100.957756055 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:11:52 crc kubenswrapper[5104]: E0130 00:11:52.225570 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-30 00:12:00.22554894 +0000 UTC m=+100.957888199 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 00:11:52 crc kubenswrapper[5104]: E0130 00:11:52.225923 5104 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 00:11:52 crc kubenswrapper[5104]: E0130 00:11:52.226041 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-30 00:12:00.226014422 +0000 UTC m=+100.958353691 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 00:11:52 crc kubenswrapper[5104]: I0130 00:11:52.225665 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:11:52 crc kubenswrapper[5104]: I0130 00:11:52.226229 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:11:52 crc kubenswrapper[5104]: I0130 00:11:52.226402 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:11:52 crc kubenswrapper[5104]: E0130 00:11:52.226445 5104 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 00:11:52 crc kubenswrapper[5104]: E0130 00:11:52.226472 5104 projected.go:289] 
Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 00:11:52 crc kubenswrapper[5104]: E0130 00:11:52.226492 5104 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:11:52 crc kubenswrapper[5104]: E0130 00:11:52.226562 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-30 00:12:00.226544296 +0000 UTC m=+100.958883555 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:11:52 crc kubenswrapper[5104]: E0130 00:11:52.226692 5104 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 00:11:52 crc kubenswrapper[5104]: E0130 00:11:52.226764 5104 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 00:11:52 crc kubenswrapper[5104]: E0130 00:11:52.226782 5104 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod 
openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:11:52 crc kubenswrapper[5104]: E0130 00:11:52.226925 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-30 00:12:00.226883946 +0000 UTC m=+100.959223165 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:11:52 crc kubenswrapper[5104]: I0130 00:11:52.277607 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:52 crc kubenswrapper[5104]: I0130 00:11:52.277677 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:52 crc kubenswrapper[5104]: I0130 00:11:52.277695 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:52 crc kubenswrapper[5104]: I0130 00:11:52.277719 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:52 crc kubenswrapper[5104]: I0130 00:11:52.277736 5104 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:52Z","lastTransitionTime":"2026-01-30T00:11:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:52 crc kubenswrapper[5104]: I0130 00:11:52.328215 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8549d8ab-08fd-4d10-b03e-d162d745184a-metrics-certs\") pod \"network-metrics-daemon-gvjb6\" (UID: \"8549d8ab-08fd-4d10-b03e-d162d745184a\") " pod="openshift-multus/network-metrics-daemon-gvjb6" Jan 30 00:11:52 crc kubenswrapper[5104]: E0130 00:11:52.328470 5104 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 00:11:52 crc kubenswrapper[5104]: E0130 00:11:52.328622 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8549d8ab-08fd-4d10-b03e-d162d745184a-metrics-certs podName:8549d8ab-08fd-4d10-b03e-d162d745184a nodeName:}" failed. No retries permitted until 2026-01-30 00:12:00.328575421 +0000 UTC m=+101.060914650 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8549d8ab-08fd-4d10-b03e-d162d745184a-metrics-certs") pod "network-metrics-daemon-gvjb6" (UID: "8549d8ab-08fd-4d10-b03e-d162d745184a") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 00:11:52 crc kubenswrapper[5104]: I0130 00:11:52.379909 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:52 crc kubenswrapper[5104]: I0130 00:11:52.379976 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:52 crc kubenswrapper[5104]: I0130 00:11:52.379988 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:52 crc kubenswrapper[5104]: I0130 00:11:52.380012 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:52 crc kubenswrapper[5104]: I0130 00:11:52.380030 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:52Z","lastTransitionTime":"2026-01-30T00:11:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:52 crc kubenswrapper[5104]: I0130 00:11:52.421282 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:52 crc kubenswrapper[5104]: I0130 00:11:52.421343 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:52 crc kubenswrapper[5104]: I0130 00:11:52.421355 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:52 crc kubenswrapper[5104]: I0130 00:11:52.421370 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:52 crc kubenswrapper[5104]: I0130 00:11:52.421385 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:52Z","lastTransitionTime":"2026-01-30T00:11:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:52 crc kubenswrapper[5104]: E0130 00:11:52.434362 5104 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ddbe5ca8-cca6-45e8-a308-ea9fc8d3013e\\\",\\\"systemUUID\\\":\\\"6d24271c-4d6f-4082-96cf-a2854971c0dc\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:52 crc kubenswrapper[5104]: I0130 00:11:52.438272 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:52 crc kubenswrapper[5104]: I0130 00:11:52.438327 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:52 crc kubenswrapper[5104]: I0130 00:11:52.438340 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:52 crc kubenswrapper[5104]: I0130 00:11:52.438359 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:52 crc kubenswrapper[5104]: I0130 00:11:52.438372 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:52Z","lastTransitionTime":"2026-01-30T00:11:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:52 crc kubenswrapper[5104]: E0130 00:11:52.448439 5104 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ddbe5ca8-cca6-45e8-a308-ea9fc8d3013e\\\",\\\"systemUUID\\\":\\\"6d24271c-4d6f-4082-96cf-a2854971c0dc\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:52 crc kubenswrapper[5104]: I0130 00:11:52.451947 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:52 crc kubenswrapper[5104]: I0130 00:11:52.451992 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:52 crc kubenswrapper[5104]: I0130 00:11:52.452011 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:52 crc kubenswrapper[5104]: I0130 00:11:52.452034 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:52 crc kubenswrapper[5104]: I0130 00:11:52.452050 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:52Z","lastTransitionTime":"2026-01-30T00:11:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:52 crc kubenswrapper[5104]: E0130 00:11:52.463992 5104 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ddbe5ca8-cca6-45e8-a308-ea9fc8d3013e\\\",\\\"systemUUID\\\":\\\"6d24271c-4d6f-4082-96cf-a2854971c0dc\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:52 crc kubenswrapper[5104]: I0130 00:11:52.472822 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:52 crc kubenswrapper[5104]: I0130 00:11:52.472882 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:52 crc kubenswrapper[5104]: I0130 00:11:52.472894 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:52 crc kubenswrapper[5104]: I0130 00:11:52.472911 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:52 crc kubenswrapper[5104]: I0130 00:11:52.472923 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:52Z","lastTransitionTime":"2026-01-30T00:11:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:52 crc kubenswrapper[5104]: E0130 00:11:52.482071 5104 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ddbe5ca8-cca6-45e8-a308-ea9fc8d3013e\\\",\\\"systemUUID\\\":\\\"6d24271c-4d6f-4082-96cf-a2854971c0dc\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:52 crc kubenswrapper[5104]: I0130 00:11:52.485761 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:52 crc kubenswrapper[5104]: I0130 00:11:52.485810 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:52 crc kubenswrapper[5104]: I0130 00:11:52.485828 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:52 crc kubenswrapper[5104]: I0130 00:11:52.485882 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:52 crc kubenswrapper[5104]: I0130 00:11:52.485901 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:52Z","lastTransitionTime":"2026-01-30T00:11:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:52 crc kubenswrapper[5104]: E0130 00:11:52.496648 5104 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ddbe5ca8-cca6-45e8-a308-ea9fc8d3013e\\\",\\\"systemUUID\\\":\\\"6d24271c-4d6f-4082-96cf-a2854971c0dc\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:52 crc kubenswrapper[5104]: E0130 00:11:52.496759 5104 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Jan 30 00:11:52 crc kubenswrapper[5104]: I0130 00:11:52.498290 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:52 crc kubenswrapper[5104]: I0130 00:11:52.498328 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:52 crc kubenswrapper[5104]: I0130 00:11:52.498338 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:52 crc kubenswrapper[5104]: I0130 00:11:52.498352 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:52 crc kubenswrapper[5104]: I0130 00:11:52.498362 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:52Z","lastTransitionTime":"2026-01-30T00:11:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:52 crc kubenswrapper[5104]: I0130 00:11:52.525024 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:11:52 crc kubenswrapper[5104]: I0130 00:11:52.525084 5104 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-gvjb6" Jan 30 00:11:52 crc kubenswrapper[5104]: E0130 00:11:52.525174 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 30 00:11:52 crc kubenswrapper[5104]: I0130 00:11:52.525180 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:11:52 crc kubenswrapper[5104]: E0130 00:11:52.525361 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gvjb6" podUID="8549d8ab-08fd-4d10-b03e-d162d745184a" Jan 30 00:11:52 crc kubenswrapper[5104]: E0130 00:11:52.525539 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 30 00:11:52 crc kubenswrapper[5104]: I0130 00:11:52.525588 5104 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:11:52 crc kubenswrapper[5104]: E0130 00:11:52.525804 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 30 00:11:52 crc kubenswrapper[5104]: I0130 00:11:52.600670 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:52 crc kubenswrapper[5104]: I0130 00:11:52.600743 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:52 crc kubenswrapper[5104]: I0130 00:11:52.600763 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:52 crc kubenswrapper[5104]: I0130 00:11:52.600789 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:52 crc kubenswrapper[5104]: I0130 00:11:52.600807 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:52Z","lastTransitionTime":"2026-01-30T00:11:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:52 crc kubenswrapper[5104]: I0130 00:11:52.703925 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:52 crc kubenswrapper[5104]: I0130 00:11:52.704000 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:52 crc kubenswrapper[5104]: I0130 00:11:52.704027 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:52 crc kubenswrapper[5104]: I0130 00:11:52.704135 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:52 crc kubenswrapper[5104]: I0130 00:11:52.704240 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:52Z","lastTransitionTime":"2026-01-30T00:11:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:52 crc kubenswrapper[5104]: I0130 00:11:52.807533 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:52 crc kubenswrapper[5104]: I0130 00:11:52.807585 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:52 crc kubenswrapper[5104]: I0130 00:11:52.807598 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:52 crc kubenswrapper[5104]: I0130 00:11:52.807617 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:52 crc kubenswrapper[5104]: I0130 00:11:52.807629 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:52Z","lastTransitionTime":"2026-01-30T00:11:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:52 crc kubenswrapper[5104]: I0130 00:11:52.909909 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:52 crc kubenswrapper[5104]: I0130 00:11:52.909951 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:52 crc kubenswrapper[5104]: I0130 00:11:52.909970 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:52 crc kubenswrapper[5104]: I0130 00:11:52.909986 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:52 crc kubenswrapper[5104]: I0130 00:11:52.909998 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:52Z","lastTransitionTime":"2026-01-30T00:11:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:53 crc kubenswrapper[5104]: I0130 00:11:53.011952 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:53 crc kubenswrapper[5104]: I0130 00:11:53.012057 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:53 crc kubenswrapper[5104]: I0130 00:11:53.012078 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:53 crc kubenswrapper[5104]: I0130 00:11:53.012103 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:53 crc kubenswrapper[5104]: I0130 00:11:53.012121 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:53Z","lastTransitionTime":"2026-01-30T00:11:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:53 crc kubenswrapper[5104]: I0130 00:11:53.114800 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:53 crc kubenswrapper[5104]: I0130 00:11:53.114912 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:53 crc kubenswrapper[5104]: I0130 00:11:53.114939 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:53 crc kubenswrapper[5104]: I0130 00:11:53.114971 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:53 crc kubenswrapper[5104]: I0130 00:11:53.114994 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:53Z","lastTransitionTime":"2026-01-30T00:11:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:53 crc kubenswrapper[5104]: I0130 00:11:53.217291 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:53 crc kubenswrapper[5104]: I0130 00:11:53.217552 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:53 crc kubenswrapper[5104]: I0130 00:11:53.217577 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:53 crc kubenswrapper[5104]: I0130 00:11:53.217609 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:53 crc kubenswrapper[5104]: I0130 00:11:53.217649 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:53Z","lastTransitionTime":"2026-01-30T00:11:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:53 crc kubenswrapper[5104]: I0130 00:11:53.321051 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:53 crc kubenswrapper[5104]: I0130 00:11:53.321129 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:53 crc kubenswrapper[5104]: I0130 00:11:53.321143 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:53 crc kubenswrapper[5104]: I0130 00:11:53.321182 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:53 crc kubenswrapper[5104]: I0130 00:11:53.321198 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:53Z","lastTransitionTime":"2026-01-30T00:11:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:53 crc kubenswrapper[5104]: I0130 00:11:53.423437 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:53 crc kubenswrapper[5104]: I0130 00:11:53.423514 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:53 crc kubenswrapper[5104]: I0130 00:11:53.423534 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:53 crc kubenswrapper[5104]: I0130 00:11:53.423558 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:53 crc kubenswrapper[5104]: I0130 00:11:53.423579 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:53Z","lastTransitionTime":"2026-01-30T00:11:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:53 crc kubenswrapper[5104]: I0130 00:11:53.526996 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:53 crc kubenswrapper[5104]: I0130 00:11:53.527059 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:53 crc kubenswrapper[5104]: I0130 00:11:53.527070 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:53 crc kubenswrapper[5104]: I0130 00:11:53.527088 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:53 crc kubenswrapper[5104]: I0130 00:11:53.527101 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:53Z","lastTransitionTime":"2026-01-30T00:11:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:53 crc kubenswrapper[5104]: I0130 00:11:53.637696 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:53 crc kubenswrapper[5104]: I0130 00:11:53.637747 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:53 crc kubenswrapper[5104]: I0130 00:11:53.637760 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:53 crc kubenswrapper[5104]: I0130 00:11:53.637777 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:53 crc kubenswrapper[5104]: I0130 00:11:53.637790 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:53Z","lastTransitionTime":"2026-01-30T00:11:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:53 crc kubenswrapper[5104]: I0130 00:11:53.740458 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:53 crc kubenswrapper[5104]: I0130 00:11:53.740495 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:53 crc kubenswrapper[5104]: I0130 00:11:53.740504 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:53 crc kubenswrapper[5104]: I0130 00:11:53.740519 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:53 crc kubenswrapper[5104]: I0130 00:11:53.740529 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:53Z","lastTransitionTime":"2026-01-30T00:11:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:53 crc kubenswrapper[5104]: I0130 00:11:53.820718 5104 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160" Jan 30 00:11:53 crc kubenswrapper[5104]: I0130 00:11:53.842448 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:53 crc kubenswrapper[5104]: I0130 00:11:53.842490 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:53 crc kubenswrapper[5104]: I0130 00:11:53.842501 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:53 crc kubenswrapper[5104]: I0130 00:11:53.842517 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:53 crc kubenswrapper[5104]: I0130 00:11:53.842531 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:53Z","lastTransitionTime":"2026-01-30T00:11:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:53 crc kubenswrapper[5104]: I0130 00:11:53.944584 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:53 crc kubenswrapper[5104]: I0130 00:11:53.944632 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:53 crc kubenswrapper[5104]: I0130 00:11:53.944645 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:53 crc kubenswrapper[5104]: I0130 00:11:53.944665 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:53 crc kubenswrapper[5104]: I0130 00:11:53.944678 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:53Z","lastTransitionTime":"2026-01-30T00:11:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:54 crc kubenswrapper[5104]: I0130 00:11:54.046969 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:54 crc kubenswrapper[5104]: I0130 00:11:54.047002 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:54 crc kubenswrapper[5104]: I0130 00:11:54.047011 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:54 crc kubenswrapper[5104]: I0130 00:11:54.047025 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:54 crc kubenswrapper[5104]: I0130 00:11:54.047034 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:54Z","lastTransitionTime":"2026-01-30T00:11:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:54 crc kubenswrapper[5104]: I0130 00:11:54.149611 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:54 crc kubenswrapper[5104]: I0130 00:11:54.149658 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:54 crc kubenswrapper[5104]: I0130 00:11:54.149671 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:54 crc kubenswrapper[5104]: I0130 00:11:54.149688 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:54 crc kubenswrapper[5104]: I0130 00:11:54.149711 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:54Z","lastTransitionTime":"2026-01-30T00:11:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:54 crc kubenswrapper[5104]: I0130 00:11:54.251680 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:54 crc kubenswrapper[5104]: I0130 00:11:54.251726 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:54 crc kubenswrapper[5104]: I0130 00:11:54.251737 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:54 crc kubenswrapper[5104]: I0130 00:11:54.251752 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:54 crc kubenswrapper[5104]: I0130 00:11:54.251761 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:54Z","lastTransitionTime":"2026-01-30T00:11:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:54 crc kubenswrapper[5104]: I0130 00:11:54.354203 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:54 crc kubenswrapper[5104]: I0130 00:11:54.354269 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:54 crc kubenswrapper[5104]: I0130 00:11:54.354287 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:54 crc kubenswrapper[5104]: I0130 00:11:54.354311 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:54 crc kubenswrapper[5104]: I0130 00:11:54.354329 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:54Z","lastTransitionTime":"2026-01-30T00:11:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:54 crc kubenswrapper[5104]: I0130 00:11:54.457034 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:54 crc kubenswrapper[5104]: I0130 00:11:54.457107 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:54 crc kubenswrapper[5104]: I0130 00:11:54.457125 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:54 crc kubenswrapper[5104]: I0130 00:11:54.457152 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:54 crc kubenswrapper[5104]: I0130 00:11:54.457172 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:54Z","lastTransitionTime":"2026-01-30T00:11:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:54 crc kubenswrapper[5104]: I0130 00:11:54.525163 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gvjb6" Jan 30 00:11:54 crc kubenswrapper[5104]: I0130 00:11:54.525316 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:11:54 crc kubenswrapper[5104]: I0130 00:11:54.525228 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:11:54 crc kubenswrapper[5104]: I0130 00:11:54.525206 5104 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:11:54 crc kubenswrapper[5104]: E0130 00:11:54.525738 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 30 00:11:54 crc kubenswrapper[5104]: E0130 00:11:54.525816 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 30 00:11:54 crc kubenswrapper[5104]: E0130 00:11:54.525917 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 30 00:11:54 crc kubenswrapper[5104]: E0130 00:11:54.525927 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gvjb6" podUID="8549d8ab-08fd-4d10-b03e-d162d745184a" Jan 30 00:11:54 crc kubenswrapper[5104]: I0130 00:11:54.559757 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:54 crc kubenswrapper[5104]: I0130 00:11:54.559843 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:54 crc kubenswrapper[5104]: I0130 00:11:54.559897 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:54 crc kubenswrapper[5104]: I0130 00:11:54.559926 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:54 crc kubenswrapper[5104]: I0130 00:11:54.559950 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:54Z","lastTransitionTime":"2026-01-30T00:11:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:54 crc kubenswrapper[5104]: I0130 00:11:54.662131 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:54 crc kubenswrapper[5104]: I0130 00:11:54.662202 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:54 crc kubenswrapper[5104]: I0130 00:11:54.662221 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:54 crc kubenswrapper[5104]: I0130 00:11:54.662246 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:54 crc kubenswrapper[5104]: I0130 00:11:54.662266 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:54Z","lastTransitionTime":"2026-01-30T00:11:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:54 crc kubenswrapper[5104]: I0130 00:11:54.765173 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:54 crc kubenswrapper[5104]: I0130 00:11:54.765273 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:54 crc kubenswrapper[5104]: I0130 00:11:54.765300 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:54 crc kubenswrapper[5104]: I0130 00:11:54.765330 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:54 crc kubenswrapper[5104]: I0130 00:11:54.765353 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:54Z","lastTransitionTime":"2026-01-30T00:11:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:54 crc kubenswrapper[5104]: I0130 00:11:54.868080 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:54 crc kubenswrapper[5104]: I0130 00:11:54.868171 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:54 crc kubenswrapper[5104]: I0130 00:11:54.868198 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:54 crc kubenswrapper[5104]: I0130 00:11:54.868245 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:54 crc kubenswrapper[5104]: I0130 00:11:54.868287 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:54Z","lastTransitionTime":"2026-01-30T00:11:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:54 crc kubenswrapper[5104]: I0130 00:11:54.971335 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:54 crc kubenswrapper[5104]: I0130 00:11:54.971426 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:54 crc kubenswrapper[5104]: I0130 00:11:54.971453 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:54 crc kubenswrapper[5104]: I0130 00:11:54.971484 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:54 crc kubenswrapper[5104]: I0130 00:11:54.971506 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:54Z","lastTransitionTime":"2026-01-30T00:11:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:55 crc kubenswrapper[5104]: I0130 00:11:55.074004 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:55 crc kubenswrapper[5104]: I0130 00:11:55.074141 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:55 crc kubenswrapper[5104]: I0130 00:11:55.074169 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:55 crc kubenswrapper[5104]: I0130 00:11:55.074202 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:55 crc kubenswrapper[5104]: I0130 00:11:55.074232 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:55Z","lastTransitionTime":"2026-01-30T00:11:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:55 crc kubenswrapper[5104]: I0130 00:11:55.176303 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:55 crc kubenswrapper[5104]: I0130 00:11:55.176388 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:55 crc kubenswrapper[5104]: I0130 00:11:55.176402 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:55 crc kubenswrapper[5104]: I0130 00:11:55.176420 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:55 crc kubenswrapper[5104]: I0130 00:11:55.176432 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:55Z","lastTransitionTime":"2026-01-30T00:11:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:55 crc kubenswrapper[5104]: I0130 00:11:55.278528 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:55 crc kubenswrapper[5104]: I0130 00:11:55.278577 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:55 crc kubenswrapper[5104]: I0130 00:11:55.278590 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:55 crc kubenswrapper[5104]: I0130 00:11:55.278606 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:55 crc kubenswrapper[5104]: I0130 00:11:55.278616 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:55Z","lastTransitionTime":"2026-01-30T00:11:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:55 crc kubenswrapper[5104]: I0130 00:11:55.381377 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:55 crc kubenswrapper[5104]: I0130 00:11:55.381469 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:55 crc kubenswrapper[5104]: I0130 00:11:55.381479 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:55 crc kubenswrapper[5104]: I0130 00:11:55.381501 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:55 crc kubenswrapper[5104]: I0130 00:11:55.381516 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:55Z","lastTransitionTime":"2026-01-30T00:11:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:55 crc kubenswrapper[5104]: I0130 00:11:55.484000 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:55 crc kubenswrapper[5104]: I0130 00:11:55.484070 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:55 crc kubenswrapper[5104]: I0130 00:11:55.484081 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:55 crc kubenswrapper[5104]: I0130 00:11:55.484100 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:55 crc kubenswrapper[5104]: I0130 00:11:55.484113 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:55Z","lastTransitionTime":"2026-01-30T00:11:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:55 crc kubenswrapper[5104]: I0130 00:11:55.586397 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:55 crc kubenswrapper[5104]: I0130 00:11:55.586476 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:55 crc kubenswrapper[5104]: I0130 00:11:55.586497 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:55 crc kubenswrapper[5104]: I0130 00:11:55.586523 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:55 crc kubenswrapper[5104]: I0130 00:11:55.586543 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:55Z","lastTransitionTime":"2026-01-30T00:11:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:55 crc kubenswrapper[5104]: I0130 00:11:55.688674 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:55 crc kubenswrapper[5104]: I0130 00:11:55.689100 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:55 crc kubenswrapper[5104]: I0130 00:11:55.689114 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:55 crc kubenswrapper[5104]: I0130 00:11:55.689154 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:55 crc kubenswrapper[5104]: I0130 00:11:55.689168 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:55Z","lastTransitionTime":"2026-01-30T00:11:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:55 crc kubenswrapper[5104]: I0130 00:11:55.790456 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:55 crc kubenswrapper[5104]: I0130 00:11:55.790522 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:55 crc kubenswrapper[5104]: I0130 00:11:55.790540 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:55 crc kubenswrapper[5104]: I0130 00:11:55.790562 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:55 crc kubenswrapper[5104]: I0130 00:11:55.790581 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:55Z","lastTransitionTime":"2026-01-30T00:11:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:55 crc kubenswrapper[5104]: I0130 00:11:55.896166 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:55 crc kubenswrapper[5104]: I0130 00:11:55.896213 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:55 crc kubenswrapper[5104]: I0130 00:11:55.896224 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:55 crc kubenswrapper[5104]: I0130 00:11:55.896238 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:55 crc kubenswrapper[5104]: I0130 00:11:55.896247 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:55Z","lastTransitionTime":"2026-01-30T00:11:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:55 crc kubenswrapper[5104]: I0130 00:11:55.935702 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"d6ddf6c6543aa5a588c640745ab3ddf74ea20a530e2fb5e1dd15bb26fed04d66"} Jan 30 00:11:55 crc kubenswrapper[5104]: I0130 00:11:55.949189 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea287dd4-000d-4cad-8964-eea48612652e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://383804c6c2c049cb0469a54bdc63fa42ec853ada3540352b5520d7b25d1da994\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-polic
y-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://b8f7b53bbb2fea415aa6f8cab552a634e497844f09ceab42a0dccba0cc0d62fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bea04eb937ed
a3bc23c54503bd818434d7a6f7fab1b23383843cc7bf8379462b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://00629bfc42e0323311bb23b075167b46d96260c873bb2179d4b4e10a20c048ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\
\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:55 crc kubenswrapper[5104]: I0130 00:11:55.958486 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:55 crc kubenswrapper[5104]: I0130 00:11:55.974112 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-bk79c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbld6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bk79c\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:55 crc kubenswrapper[5104]: I0130 00:11:55.983934 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:55 crc kubenswrapper[5104]: I0130 00:11:55.993459 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-zg4cj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"925f8c53-ccbf-4f3c-a811-4d64d678e217\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4vhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4vhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-zg4cj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:55 crc kubenswrapper[5104]: I0130 00:11:55.997832 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:55 crc kubenswrapper[5104]: I0130 00:11:55.997895 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:55 crc kubenswrapper[5104]: I0130 00:11:55.997905 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:55 crc kubenswrapper[5104]: I0130 00:11:55.997919 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:55 crc kubenswrapper[5104]: I0130 00:11:55.997929 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:55Z","lastTransitionTime":"2026-01-30T00:11:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:56 crc kubenswrapper[5104]: I0130 00:11:56.019254 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1afc018-4e45-49c3-a326-4068c590483b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://3ea0c352cb0e6754f7a7b428ac74c8d1d59af3fcd309fead8f147b31fc9d84b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:25Z\\\"}},\\\"user\\\":{\\\
"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://ba7755cd1898e33390a59405284ca9bc8ab6567dee2e7c1134c9093d25ae341f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://560668f5a529df74a5be2ea17dcc5c09bd64122a4f78def29e8d38b4f098ec64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f6223
7a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://10172da6a5c353a3c321326f80b9af59fe5c6acdb48f8951f30401fa25fde394\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:27Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\
\\"cri-o://1a8af248a457a824a347b4bacdb934ce6f91151e6814ba046ecfd0b2f9fef1c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c3dfaca6ebdcc7e86e59721d7b1c4e7825a4a23ea5ee58dc5b1445a63994b711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\
\\":{\\\"containerID\\\":\\\"cri-o://c3dfaca6ebdcc7e86e59721d7b1c4e7825a4a23ea5ee58dc5b1445a63994b711\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://82c9d3ad0af1dbe7691b30eb224da8a661baeac16b755dc1fccf77c90dda404a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82c9d3ad0af1dbe7691b30eb224da8a661baeac16b755dc1fccf77c90dda404a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://dd29f88f850f66d48dcb41d9ff4b6ed03ce53947fcf1d89e94eb89734d32a9af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388
ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd29f88f850f66d48dcb41d9ff4b6ed03ce53947fcf1d89e94eb89734d32a9af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:20Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:56 crc kubenswrapper[5104]: I0130 00:11:56.033747 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:56 crc kubenswrapper[5104]: I0130 00:11:56.047244 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:56 crc kubenswrapper[5104]: I0130 00:11:56.059173 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gvjb6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8549d8ab-08fd-4d10-b03e-d162d745184a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbm4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbm4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gvjb6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:56 crc kubenswrapper[5104]: I0130 00:11:56.083314 5104 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-dr5dp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4dd9b451-9f5e-4822-b340-7557a89a3ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmsn\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:44Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dr5dp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:56 crc kubenswrapper[5104]: I0130 00:11:56.095629 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzfxc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f49b5db-a679-4eef-9bf2-8d0275caac12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tkzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tkzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:44Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-jzfxc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:56 crc kubenswrapper[5104]: I0130 00:11:56.099603 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:56 crc kubenswrapper[5104]: I0130 00:11:56.099673 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:56 crc kubenswrapper[5104]: I0130 00:11:56.099694 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:56 crc kubenswrapper[5104]: I0130 00:11:56.099719 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:56 crc kubenswrapper[5104]: I0130 00:11:56.099737 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:56Z","lastTransitionTime":"2026-01-30T00:11:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:56 crc kubenswrapper[5104]: I0130 00:11:56.112763 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a53efdae-bb47-4e91-8fd9-aa3ce42e07fe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://cefeb3f03767c76f93f967f91a3a91beb76d605eca9cbc8c1511e20275afe6f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6edbf8d3caa46b1b8204f581c4ee351245b3a0569a7dc860e8eebd05c21de73e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://341f0f24fd96be5b40281bed5ebcb965c115891201881ea7fca2d25b621efcf4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a59bc6c54f
ddbc4eea03ba9234e68465071e0ae39793c65b4dee93d13d3d8fc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59bc6c54fddbc4eea03ba9234e68465071e0ae39793c65b4dee93d13d3d8fc2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T00:11:11Z\\\",\\\"message\\\":\\\"rue\\\\nI0130 00:11:10.550402 1 observer_polling.go:159] Starting file observer\\\\nW0130 00:11:10.562731 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 00:11:10.562933 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0130 00:11:10.564179 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1712869688/tls.crt::/tmp/serving-cert-1712869688/tls.key\\\\\\\"\\\\nI0130 00:11:11.630504 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 00:11:11.635621 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 00:11:11.635658 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 00:11:11.635724 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 00:11:11.635735 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 00:11:11.641959 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 00:11:11.642000 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0130 00:11:11.642008 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints 
registered and discovery information is complete\\\\nW0130 00:11:11.642013 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 00:11:11.642036 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 00:11:11.642046 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 00:11:11.642053 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 00:11:11.642062 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 00:11:11.644944 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T00:11:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://cb67eb59e5fa97f3ac0f355c63297316d06ab76329d05baadeb90ba933d0299b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ae33bfa613b46c1b72972cced863e6209b8e9eafade2bfbe6cd19d2fb5ee3591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae33bfa613b46c1b72972cced863e6209b8e9eafade2bfbe6cd19d2fb5ee3591\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:56 crc kubenswrapper[5104]: I0130 00:11:56.123360 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0a5f88e-2cb1-4067-82fa-dd04127fe6a6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://446c0914d8c5bcbe4b931fac391de5327afb0740f5a647ff10bfa8ae3718070a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://991a729c5e18b1bfa18b949f180147804f656e534eed823b6cfd848589448a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://991a729c5e18b1bfa18b949f180147804f656e534eed823b6cfd848589448a11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:56 crc kubenswrapper[5104]: I0130 00:11:56.139587 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9mfdf" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc38d06d-c458-429d-8dbf-43aab1cd4e57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-85jg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-85jg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-85jg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-85jg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-85jg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-85jg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-85jg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9mfdf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:56 crc kubenswrapper[5104]: I0130 00:11:56.150266 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-qpj6b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27b37cd2-349b-4e9b-9665-06efa944384c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57wnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qpj6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:56 crc kubenswrapper[5104]: I0130 00:11:56.164490 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eafa3f8d-ea5b-4973-b2fe-537afe846212\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://39527af563278eaf7f4de232e9b050b0a2a37b4f221fbcff6253ffbfc6a6db05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\
\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b0901a202d5e8c1e87d98c3af50e89ff2f04e3048aa45f79db8a23a1020c0178\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6472a07b9e0d1d4d2094e9fe4464e17f6230a2915a19bb59bd54df043380b9f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\"
:0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://986b21b22cd3bdf35c46b74e23ebf17435e4f31f7fc4cb8270e7bef6c7d3aeb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://986b21b22cd3bdf35c46b74e23ebf17435e4f31f7fc4cb8270e7bef6c7d3aeb3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:20Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:56 crc kubenswrapper[5104]: I0130 00:11:56.175455 5104 
status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d6ddf6c6543aa5a588c640745ab3ddf74ea20a530e2fb5e1dd15bb26fed04d66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:11:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubern
etes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:56 crc kubenswrapper[5104]: I0130 00:11:56.183227 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:56 crc kubenswrapper[5104]: I0130 00:11:56.189747 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qnqx2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16c76ea1-575d-492f-b64a-9116b99a5b28\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfj8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qnqx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:56 crc kubenswrapper[5104]: I0130 00:11:56.202479 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:56 crc kubenswrapper[5104]: I0130 00:11:56.202535 5104 kubelet_node_status.go:736] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:56 crc kubenswrapper[5104]: I0130 00:11:56.202553 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:56 crc kubenswrapper[5104]: I0130 00:11:56.202576 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:56 crc kubenswrapper[5104]: I0130 00:11:56.202593 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:56Z","lastTransitionTime":"2026-01-30T00:11:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:56 crc kubenswrapper[5104]: I0130 00:11:56.304896 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:56 crc kubenswrapper[5104]: I0130 00:11:56.304972 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:56 crc kubenswrapper[5104]: I0130 00:11:56.304993 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:56 crc kubenswrapper[5104]: I0130 00:11:56.305020 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:56 crc kubenswrapper[5104]: I0130 00:11:56.305061 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:56Z","lastTransitionTime":"2026-01-30T00:11:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:56 crc kubenswrapper[5104]: I0130 00:11:56.407906 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:56 crc kubenswrapper[5104]: I0130 00:11:56.407978 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:56 crc kubenswrapper[5104]: I0130 00:11:56.408003 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:56 crc kubenswrapper[5104]: I0130 00:11:56.408038 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:56 crc kubenswrapper[5104]: I0130 00:11:56.408062 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:56Z","lastTransitionTime":"2026-01-30T00:11:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:56 crc kubenswrapper[5104]: I0130 00:11:56.510935 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:56 crc kubenswrapper[5104]: I0130 00:11:56.511010 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:56 crc kubenswrapper[5104]: I0130 00:11:56.511022 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:56 crc kubenswrapper[5104]: I0130 00:11:56.511043 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:56 crc kubenswrapper[5104]: I0130 00:11:56.511060 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:56Z","lastTransitionTime":"2026-01-30T00:11:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:56 crc kubenswrapper[5104]: I0130 00:11:56.525431 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:11:56 crc kubenswrapper[5104]: E0130 00:11:56.525631 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 30 00:11:56 crc kubenswrapper[5104]: I0130 00:11:56.525707 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gvjb6" Jan 30 00:11:56 crc kubenswrapper[5104]: I0130 00:11:56.525722 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:11:56 crc kubenswrapper[5104]: E0130 00:11:56.526014 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gvjb6" podUID="8549d8ab-08fd-4d10-b03e-d162d745184a" Jan 30 00:11:56 crc kubenswrapper[5104]: E0130 00:11:56.526183 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 30 00:11:56 crc kubenswrapper[5104]: I0130 00:11:56.526581 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:11:56 crc kubenswrapper[5104]: E0130 00:11:56.526758 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 30 00:11:56 crc kubenswrapper[5104]: I0130 00:11:56.614499 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:56 crc kubenswrapper[5104]: I0130 00:11:56.614579 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:56 crc kubenswrapper[5104]: I0130 00:11:56.614604 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:56 crc kubenswrapper[5104]: I0130 00:11:56.614634 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:56 crc kubenswrapper[5104]: I0130 00:11:56.614659 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:56Z","lastTransitionTime":"2026-01-30T00:11:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:56 crc kubenswrapper[5104]: I0130 00:11:56.718579 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:56 crc kubenswrapper[5104]: I0130 00:11:56.718647 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:56 crc kubenswrapper[5104]: I0130 00:11:56.718659 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:56 crc kubenswrapper[5104]: I0130 00:11:56.718688 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:56 crc kubenswrapper[5104]: I0130 00:11:56.718703 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:56Z","lastTransitionTime":"2026-01-30T00:11:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:56 crc kubenswrapper[5104]: I0130 00:11:56.822183 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:56 crc kubenswrapper[5104]: I0130 00:11:56.822250 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:56 crc kubenswrapper[5104]: I0130 00:11:56.822269 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:56 crc kubenswrapper[5104]: I0130 00:11:56.822294 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:56 crc kubenswrapper[5104]: I0130 00:11:56.822311 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:56Z","lastTransitionTime":"2026-01-30T00:11:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:56 crc kubenswrapper[5104]: I0130 00:11:56.925706 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:56 crc kubenswrapper[5104]: I0130 00:11:56.925791 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:56 crc kubenswrapper[5104]: I0130 00:11:56.925816 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:56 crc kubenswrapper[5104]: I0130 00:11:56.925881 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:56 crc kubenswrapper[5104]: I0130 00:11:56.925904 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:56Z","lastTransitionTime":"2026-01-30T00:11:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:57 crc kubenswrapper[5104]: I0130 00:11:57.028600 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:57 crc kubenswrapper[5104]: I0130 00:11:57.028654 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:57 crc kubenswrapper[5104]: I0130 00:11:57.028667 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:57 crc kubenswrapper[5104]: I0130 00:11:57.028685 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:57 crc kubenswrapper[5104]: I0130 00:11:57.028699 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:57Z","lastTransitionTime":"2026-01-30T00:11:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:57 crc kubenswrapper[5104]: I0130 00:11:57.131318 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:57 crc kubenswrapper[5104]: I0130 00:11:57.131590 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:57 crc kubenswrapper[5104]: I0130 00:11:57.131715 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:57 crc kubenswrapper[5104]: I0130 00:11:57.131805 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:57 crc kubenswrapper[5104]: I0130 00:11:57.131905 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:57Z","lastTransitionTime":"2026-01-30T00:11:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:57 crc kubenswrapper[5104]: I0130 00:11:57.234556 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:57 crc kubenswrapper[5104]: I0130 00:11:57.234597 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:57 crc kubenswrapper[5104]: I0130 00:11:57.234608 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:57 crc kubenswrapper[5104]: I0130 00:11:57.234622 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:57 crc kubenswrapper[5104]: I0130 00:11:57.234632 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:57Z","lastTransitionTime":"2026-01-30T00:11:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:57 crc kubenswrapper[5104]: I0130 00:11:57.337985 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:57 crc kubenswrapper[5104]: I0130 00:11:57.338312 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:57 crc kubenswrapper[5104]: I0130 00:11:57.338449 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:57 crc kubenswrapper[5104]: I0130 00:11:57.338623 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:57 crc kubenswrapper[5104]: I0130 00:11:57.338775 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:57Z","lastTransitionTime":"2026-01-30T00:11:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:57 crc kubenswrapper[5104]: I0130 00:11:57.441038 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:57 crc kubenswrapper[5104]: I0130 00:11:57.441292 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:57 crc kubenswrapper[5104]: I0130 00:11:57.441380 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:57 crc kubenswrapper[5104]: I0130 00:11:57.441665 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:57 crc kubenswrapper[5104]: I0130 00:11:57.441753 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:57Z","lastTransitionTime":"2026-01-30T00:11:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:57 crc kubenswrapper[5104]: I0130 00:11:57.544774 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:57 crc kubenswrapper[5104]: I0130 00:11:57.544819 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:57 crc kubenswrapper[5104]: I0130 00:11:57.544833 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:57 crc kubenswrapper[5104]: I0130 00:11:57.544871 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:57 crc kubenswrapper[5104]: I0130 00:11:57.544887 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:57Z","lastTransitionTime":"2026-01-30T00:11:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:57 crc kubenswrapper[5104]: I0130 00:11:57.649071 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:57 crc kubenswrapper[5104]: I0130 00:11:57.649367 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:57 crc kubenswrapper[5104]: I0130 00:11:57.649541 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:57 crc kubenswrapper[5104]: I0130 00:11:57.649710 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:57 crc kubenswrapper[5104]: I0130 00:11:57.649937 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:57Z","lastTransitionTime":"2026-01-30T00:11:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:57 crc kubenswrapper[5104]: I0130 00:11:57.753078 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:57 crc kubenswrapper[5104]: I0130 00:11:57.753146 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:57 crc kubenswrapper[5104]: I0130 00:11:57.753171 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:57 crc kubenswrapper[5104]: I0130 00:11:57.753201 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:57 crc kubenswrapper[5104]: I0130 00:11:57.753223 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:57Z","lastTransitionTime":"2026-01-30T00:11:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:57 crc kubenswrapper[5104]: I0130 00:11:57.855659 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:57 crc kubenswrapper[5104]: I0130 00:11:57.855720 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:57 crc kubenswrapper[5104]: I0130 00:11:57.855740 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:57 crc kubenswrapper[5104]: I0130 00:11:57.855763 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:57 crc kubenswrapper[5104]: I0130 00:11:57.855781 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:57Z","lastTransitionTime":"2026-01-30T00:11:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:57 crc kubenswrapper[5104]: I0130 00:11:57.949393 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bk79c" event={"ID":"3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f","Type":"ContainerStarted","Data":"33a5e4f0b9727f64dc777e52dfe8a3658603775c843c9fdba0764b55e730ba77"} Jan 30 00:11:57 crc kubenswrapper[5104]: I0130 00:11:57.959716 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:57 crc kubenswrapper[5104]: I0130 00:11:57.959783 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:57 crc kubenswrapper[5104]: I0130 00:11:57.959804 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:57 crc kubenswrapper[5104]: I0130 00:11:57.959836 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:57 crc kubenswrapper[5104]: I0130 00:11:57.959896 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:57Z","lastTransitionTime":"2026-01-30T00:11:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:57 crc kubenswrapper[5104]: I0130 00:11:57.965623 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:57 crc kubenswrapper[5104]: I0130 00:11:57.976352 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-zg4cj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"925f8c53-ccbf-4f3c-a811-4d64d678e217\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4vhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4vhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-zg4cj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:58 crc kubenswrapper[5104]: I0130 00:11:58.005069 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1afc018-4e45-49c3-a326-4068c590483b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://3ea0c352cb0e6754f7a7b428ac74c8d1d59af3fcd309fead8f147b31fc9d84b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",
\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://ba7755cd1898e33390a59405284ca9bc8ab6567dee2e7c1134c9093d25ae341f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://560668f5a529df74a5be2ea17dcc5c09bd64122a4f78def29e8d38b4f098ec64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\
"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://10172da6a5c353a3c321326f80b9af59fe5c6acdb48f8951f30401fa25fde394\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:27Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://1a8af248a457a824a347b4bacdb934ce6f91151e6814ba046ecfd0b2f9fef1c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a68
2480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c3dfaca6ebdcc7e86e59721d7b1c4e7825a4a23ea5ee58dc5b1445a63994b711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3dfaca6ebdcc7e86e59721d7b1c4e7825a4a23ea5ee58dc5b1445a63994b711\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}
,\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://82c9d3ad0af1dbe7691b30eb224da8a661baeac16b755dc1fccf77c90dda404a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82c9d3ad0af1dbe7691b30eb224da8a661baeac16b755dc1fccf77c90dda404a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://dd29f88f850f66d48dcb41d9ff4b6ed03ce53947fcf1d89e94eb89734d32a9af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-
o://dd29f88f850f66d48dcb41d9ff4b6ed03ce53947fcf1d89e94eb89734d32a9af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:20Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:58 crc kubenswrapper[5104]: I0130 00:11:58.019197 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:58 crc kubenswrapper[5104]: I0130 00:11:58.030834 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:58 crc kubenswrapper[5104]: I0130 00:11:58.042943 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gvjb6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8549d8ab-08fd-4d10-b03e-d162d745184a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbm4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbm4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gvjb6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:58 crc kubenswrapper[5104]: I0130 00:11:58.060521 5104 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-dr5dp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4dd9b451-9f5e-4822-b340-7557a89a3ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmsn\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:44Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dr5dp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:58 crc kubenswrapper[5104]: I0130 00:11:58.074673 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzfxc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f49b5db-a679-4eef-9bf2-8d0275caac12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tkzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tkzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:44Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-jzfxc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:58 crc kubenswrapper[5104]: I0130 00:11:58.079180 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:58 crc kubenswrapper[5104]: I0130 00:11:58.079245 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:58 crc kubenswrapper[5104]: I0130 00:11:58.079257 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:58 crc kubenswrapper[5104]: I0130 00:11:58.079305 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:58 crc kubenswrapper[5104]: I0130 00:11:58.079319 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:58Z","lastTransitionTime":"2026-01-30T00:11:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:58 crc kubenswrapper[5104]: I0130 00:11:58.088103 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a53efdae-bb47-4e91-8fd9-aa3ce42e07fe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://cefeb3f03767c76f93f967f91a3a91beb76d605eca9cbc8c1511e20275afe6f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6edbf8d3caa46b1b8204f581c4ee351245b3a0569a7dc860e8eebd05c21de73e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://341f0f24fd96be5b40281bed5ebcb965c115891201881ea7fca2d25b621efcf4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a59bc6c54f
ddbc4eea03ba9234e68465071e0ae39793c65b4dee93d13d3d8fc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59bc6c54fddbc4eea03ba9234e68465071e0ae39793c65b4dee93d13d3d8fc2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T00:11:11Z\\\",\\\"message\\\":\\\"rue\\\\nI0130 00:11:10.550402 1 observer_polling.go:159] Starting file observer\\\\nW0130 00:11:10.562731 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 00:11:10.562933 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0130 00:11:10.564179 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1712869688/tls.crt::/tmp/serving-cert-1712869688/tls.key\\\\\\\"\\\\nI0130 00:11:11.630504 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 00:11:11.635621 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 00:11:11.635658 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 00:11:11.635724 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 00:11:11.635735 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 00:11:11.641959 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 00:11:11.642000 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0130 00:11:11.642008 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints 
registered and discovery information is complete\\\\nW0130 00:11:11.642013 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 00:11:11.642036 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 00:11:11.642046 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 00:11:11.642053 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 00:11:11.642062 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 00:11:11.644944 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T00:11:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://cb67eb59e5fa97f3ac0f355c63297316d06ab76329d05baadeb90ba933d0299b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ae33bfa613b46c1b72972cced863e6209b8e9eafade2bfbe6cd19d2fb5ee3591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae33bfa613b46c1b72972cced863e6209b8e9eafade2bfbe6cd19d2fb5ee3591\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:58 crc kubenswrapper[5104]: I0130 00:11:58.098931 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0a5f88e-2cb1-4067-82fa-dd04127fe6a6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://446c0914d8c5bcbe4b931fac391de5327afb0740f5a647ff10bfa8ae3718070a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://991a729c5e18b1bfa18b949f180147804f656e534eed823b6cfd848589448a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://991a729c5e18b1bfa18b949f180147804f656e534eed823b6cfd848589448a11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:58 crc kubenswrapper[5104]: I0130 00:11:58.117044 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9mfdf" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc38d06d-c458-429d-8dbf-43aab1cd4e57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-85jg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-85jg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-85jg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-85jg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-85jg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-85jg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-85jg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9mfdf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:58 crc kubenswrapper[5104]: I0130 00:11:58.124687 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-qpj6b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27b37cd2-349b-4e9b-9665-06efa944384c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57wnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qpj6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:58 crc kubenswrapper[5104]: I0130 00:11:58.137970 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eafa3f8d-ea5b-4973-b2fe-537afe846212\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://39527af563278eaf7f4de232e9b050b0a2a37b4f221fbcff6253ffbfc6a6db05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\
\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b0901a202d5e8c1e87d98c3af50e89ff2f04e3048aa45f79db8a23a1020c0178\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6472a07b9e0d1d4d2094e9fe4464e17f6230a2915a19bb59bd54df043380b9f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\"
:0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://986b21b22cd3bdf35c46b74e23ebf17435e4f31f7fc4cb8270e7bef6c7d3aeb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://986b21b22cd3bdf35c46b74e23ebf17435e4f31f7fc4cb8270e7bef6c7d3aeb3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:20Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:58 crc kubenswrapper[5104]: I0130 00:11:58.151077 5104 
status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d6ddf6c6543aa5a588c640745ab3ddf74ea20a530e2fb5e1dd15bb26fed04d66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:11:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubern
etes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:58 crc kubenswrapper[5104]: I0130 00:11:58.166599 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:58 crc kubenswrapper[5104]: I0130 00:11:58.180922 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qnqx2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16c76ea1-575d-492f-b64a-9116b99a5b28\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfj8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qnqx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:58 crc kubenswrapper[5104]: I0130 00:11:58.181937 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:58 crc kubenswrapper[5104]: I0130 00:11:58.182006 5104 kubelet_node_status.go:736] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:58 crc kubenswrapper[5104]: I0130 00:11:58.182028 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:58 crc kubenswrapper[5104]: I0130 00:11:58.182058 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:58 crc kubenswrapper[5104]: I0130 00:11:58.182082 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:58Z","lastTransitionTime":"2026-01-30T00:11:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:58 crc kubenswrapper[5104]: I0130 00:11:58.195097 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea287dd4-000d-4cad-8964-eea48612652e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://383804c6c2c049cb0469a54bdc63fa42ec853ada3540352b5520d7b25d1da994\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name
\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://b8f7b53bbb2fea415aa6f8cab552a634e497844f09ceab42a0dccba0cc0d62fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bea04eb937eda3bc23c54503bd818434d7a6f7fab1b23383843cc7bf8379462b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://00629bfc42e0323311bb23b075167b46d96260c873bb2179d4b4e10a20c048ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\
":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:58 crc kubenswrapper[5104]: I0130 00:11:58.214278 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:58 crc kubenswrapper[5104]: I0130 00:11:58.231284 5104 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-bk79c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"},\\\"containerID\\\":\\\"cri-o://33a5e4f0b9727f64dc777e52dfe8a3658603775c843c9fdba0764b55e730ba77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:11:57Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"moun
tPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbld6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bk79c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:58 crc 
kubenswrapper[5104]: I0130 00:11:58.284825 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:58 crc kubenswrapper[5104]: I0130 00:11:58.284930 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:58 crc kubenswrapper[5104]: I0130 00:11:58.284950 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:58 crc kubenswrapper[5104]: I0130 00:11:58.284981 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:58 crc kubenswrapper[5104]: I0130 00:11:58.284997 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:58Z","lastTransitionTime":"2026-01-30T00:11:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:58 crc kubenswrapper[5104]: I0130 00:11:58.386634 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:58 crc kubenswrapper[5104]: I0130 00:11:58.386683 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:58 crc kubenswrapper[5104]: I0130 00:11:58.386692 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:58 crc kubenswrapper[5104]: I0130 00:11:58.386705 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:58 crc kubenswrapper[5104]: I0130 00:11:58.386715 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:58Z","lastTransitionTime":"2026-01-30T00:11:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:58 crc kubenswrapper[5104]: I0130 00:11:58.488384 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:58 crc kubenswrapper[5104]: I0130 00:11:58.488431 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:58 crc kubenswrapper[5104]: I0130 00:11:58.488444 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:58 crc kubenswrapper[5104]: I0130 00:11:58.488461 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:58 crc kubenswrapper[5104]: I0130 00:11:58.488473 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:58Z","lastTransitionTime":"2026-01-30T00:11:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:58 crc kubenswrapper[5104]: I0130 00:11:58.525236 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:11:58 crc kubenswrapper[5104]: I0130 00:11:58.525292 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:11:58 crc kubenswrapper[5104]: I0130 00:11:58.525244 5104 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-gvjb6" Jan 30 00:11:58 crc kubenswrapper[5104]: E0130 00:11:58.525413 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 30 00:11:58 crc kubenswrapper[5104]: E0130 00:11:58.525598 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gvjb6" podUID="8549d8ab-08fd-4d10-b03e-d162d745184a" Jan 30 00:11:58 crc kubenswrapper[5104]: I0130 00:11:58.525681 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:11:58 crc kubenswrapper[5104]: E0130 00:11:58.525756 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 30 00:11:58 crc kubenswrapper[5104]: E0130 00:11:58.525931 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 30 00:11:58 crc kubenswrapper[5104]: I0130 00:11:58.590234 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:58 crc kubenswrapper[5104]: I0130 00:11:58.590278 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:58 crc kubenswrapper[5104]: I0130 00:11:58.590290 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:58 crc kubenswrapper[5104]: I0130 00:11:58.590305 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:58 crc kubenswrapper[5104]: I0130 00:11:58.590315 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:58Z","lastTransitionTime":"2026-01-30T00:11:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:58 crc kubenswrapper[5104]: I0130 00:11:58.694966 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:58 crc kubenswrapper[5104]: I0130 00:11:58.695011 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:58 crc kubenswrapper[5104]: I0130 00:11:58.695020 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:58 crc kubenswrapper[5104]: I0130 00:11:58.695035 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:58 crc kubenswrapper[5104]: I0130 00:11:58.695044 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:58Z","lastTransitionTime":"2026-01-30T00:11:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:58 crc kubenswrapper[5104]: I0130 00:11:58.806083 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:58 crc kubenswrapper[5104]: I0130 00:11:58.806429 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:58 crc kubenswrapper[5104]: I0130 00:11:58.806442 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:58 crc kubenswrapper[5104]: I0130 00:11:58.806459 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:58 crc kubenswrapper[5104]: I0130 00:11:58.806501 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:58Z","lastTransitionTime":"2026-01-30T00:11:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:58 crc kubenswrapper[5104]: I0130 00:11:58.911271 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:58 crc kubenswrapper[5104]: I0130 00:11:58.911306 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:58 crc kubenswrapper[5104]: I0130 00:11:58.911315 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:58 crc kubenswrapper[5104]: I0130 00:11:58.911328 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:58 crc kubenswrapper[5104]: I0130 00:11:58.911337 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:58Z","lastTransitionTime":"2026-01-30T00:11:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:58 crc kubenswrapper[5104]: I0130 00:11:58.955171 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"b432b2ca1ed8c9669264887cdbc30313e3ac1d425d89e988a60500f4c73473a6"} Jan 30 00:11:59 crc kubenswrapper[5104]: I0130 00:11:59.012638 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:59 crc kubenswrapper[5104]: I0130 00:11:59.012669 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:59 crc kubenswrapper[5104]: I0130 00:11:59.012679 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:59 crc kubenswrapper[5104]: I0130 00:11:59.012693 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:59 crc kubenswrapper[5104]: I0130 00:11:59.012709 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:59Z","lastTransitionTime":"2026-01-30T00:11:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:59 crc kubenswrapper[5104]: I0130 00:11:59.115417 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:59 crc kubenswrapper[5104]: I0130 00:11:59.115673 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:59 crc kubenswrapper[5104]: I0130 00:11:59.115738 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:59 crc kubenswrapper[5104]: I0130 00:11:59.115803 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:59 crc kubenswrapper[5104]: I0130 00:11:59.115880 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:59Z","lastTransitionTime":"2026-01-30T00:11:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:59 crc kubenswrapper[5104]: I0130 00:11:59.217363 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:59 crc kubenswrapper[5104]: I0130 00:11:59.217413 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:59 crc kubenswrapper[5104]: I0130 00:11:59.217430 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:59 crc kubenswrapper[5104]: I0130 00:11:59.217453 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:59 crc kubenswrapper[5104]: I0130 00:11:59.217470 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:59Z","lastTransitionTime":"2026-01-30T00:11:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:59 crc kubenswrapper[5104]: I0130 00:11:59.319889 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:59 crc kubenswrapper[5104]: I0130 00:11:59.319927 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:59 crc kubenswrapper[5104]: I0130 00:11:59.319935 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:59 crc kubenswrapper[5104]: I0130 00:11:59.319957 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:59 crc kubenswrapper[5104]: I0130 00:11:59.319967 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:59Z","lastTransitionTime":"2026-01-30T00:11:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:59 crc kubenswrapper[5104]: I0130 00:11:59.421452 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:59 crc kubenswrapper[5104]: I0130 00:11:59.421496 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:59 crc kubenswrapper[5104]: I0130 00:11:59.421507 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:59 crc kubenswrapper[5104]: I0130 00:11:59.421524 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:59 crc kubenswrapper[5104]: I0130 00:11:59.421536 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:59Z","lastTransitionTime":"2026-01-30T00:11:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:59 crc kubenswrapper[5104]: I0130 00:11:59.522934 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:59 crc kubenswrapper[5104]: I0130 00:11:59.522980 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:59 crc kubenswrapper[5104]: I0130 00:11:59.522994 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:59 crc kubenswrapper[5104]: I0130 00:11:59.523010 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:59 crc kubenswrapper[5104]: I0130 00:11:59.523021 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:59Z","lastTransitionTime":"2026-01-30T00:11:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:59 crc kubenswrapper[5104]: I0130 00:11:59.526016 5104 scope.go:117] "RemoveContainer" containerID="a59bc6c54fddbc4eea03ba9234e68465071e0ae39793c65b4dee93d13d3d8fc2" Jan 30 00:11:59 crc kubenswrapper[5104]: I0130 00:11:59.624937 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:59 crc kubenswrapper[5104]: I0130 00:11:59.625003 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:59 crc kubenswrapper[5104]: I0130 00:11:59.625020 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:59 crc kubenswrapper[5104]: I0130 00:11:59.625041 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:59 crc kubenswrapper[5104]: I0130 00:11:59.625057 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:59Z","lastTransitionTime":"2026-01-30T00:11:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:59 crc kubenswrapper[5104]: I0130 00:11:59.727258 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:59 crc kubenswrapper[5104]: I0130 00:11:59.727301 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:59 crc kubenswrapper[5104]: I0130 00:11:59.727312 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:59 crc kubenswrapper[5104]: I0130 00:11:59.727327 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:59 crc kubenswrapper[5104]: I0130 00:11:59.727337 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:59Z","lastTransitionTime":"2026-01-30T00:11:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:59 crc kubenswrapper[5104]: I0130 00:11:59.829317 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:59 crc kubenswrapper[5104]: I0130 00:11:59.829372 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:59 crc kubenswrapper[5104]: I0130 00:11:59.829390 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:59 crc kubenswrapper[5104]: I0130 00:11:59.829413 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:59 crc kubenswrapper[5104]: I0130 00:11:59.829431 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:59Z","lastTransitionTime":"2026-01-30T00:11:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:59 crc kubenswrapper[5104]: I0130 00:11:59.931530 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:59 crc kubenswrapper[5104]: I0130 00:11:59.931597 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:59 crc kubenswrapper[5104]: I0130 00:11:59.931617 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:59 crc kubenswrapper[5104]: I0130 00:11:59.931643 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:59 crc kubenswrapper[5104]: I0130 00:11:59.931660 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:59Z","lastTransitionTime":"2026-01-30T00:11:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:59 crc kubenswrapper[5104]: I0130 00:11:59.962394 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"413a6d7bdefe2d2a20224b4255157f83eed70be058eef1eb6dba7d80be5fc034"} Jan 30 00:11:59 crc kubenswrapper[5104]: I0130 00:11:59.966410 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-zg4cj" event={"ID":"925f8c53-ccbf-4f3c-a811-4d64d678e217","Type":"ContainerStarted","Data":"ea26defff0da85d93df6b1790036090f8ff48e30049b02c4afaf12e066b80d21"} Jan 30 00:11:59 crc kubenswrapper[5104]: I0130 00:11:59.966487 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-zg4cj" event={"ID":"925f8c53-ccbf-4f3c-a811-4d64d678e217","Type":"ContainerStarted","Data":"30e984e03687bc4563d8f4925fb59591eba19addcfe856deb3f4dac7ef260a88"} Jan 30 00:11:59 crc kubenswrapper[5104]: I0130 00:11:59.970251 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-qnqx2" event={"ID":"16c76ea1-575d-492f-b64a-9116b99a5b28","Type":"ContainerStarted","Data":"a26427db9c63bd4c82af99d0af59bb9c0f4974a91f4d33bf79d57a8dd5697fa1"} Jan 30 00:11:59 crc kubenswrapper[5104]: I0130 00:11:59.972826 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-qpj6b" event={"ID":"27b37cd2-349b-4e9b-9665-06efa944384c","Type":"ContainerStarted","Data":"1dd6cf40cec30c12ab1f816a78a79337a6fdf18c146c540f95367c0aa25f519e"} Jan 30 00:11:59 crc kubenswrapper[5104]: I0130 00:11:59.978268 5104 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Jan 30 00:11:59 crc kubenswrapper[5104]: I0130 00:11:59.981483 5104 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"ba201d512edd4ea081c45c6b965e415a70015052f56b44640d9d6f3f294f3c12"} Jan 30 00:11:59 crc kubenswrapper[5104]: I0130 00:11:59.982261 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:12:00 crc kubenswrapper[5104]: I0130 00:12:00.021149 5104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=17.021122966 podStartE2EDuration="17.021122966s" podCreationTimestamp="2026-01-30 00:11:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:00.020164891 +0000 UTC m=+100.752504130" watchObservedRunningTime="2026-01-30 00:12:00.021122966 +0000 UTC m=+100.753462225" Jan 30 00:12:00 crc kubenswrapper[5104]: I0130 00:12:00.034065 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:00 crc kubenswrapper[5104]: I0130 00:12:00.034131 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:00 crc kubenswrapper[5104]: I0130 00:12:00.034150 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:00 crc kubenswrapper[5104]: I0130 00:12:00.034176 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:12:00 crc kubenswrapper[5104]: I0130 00:12:00.034194 5104 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:12:00Z","lastTransitionTime":"2026-01-30T00:12:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:12:00 crc kubenswrapper[5104]: I0130 00:12:00.121296 5104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=17.12126473 podStartE2EDuration="17.12126473s" podCreationTimestamp="2026-01-30 00:11:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:00.120843007 +0000 UTC m=+100.853182246" watchObservedRunningTime="2026-01-30 00:12:00.12126473 +0000 UTC m=+100.853603989" Jan 30 00:12:00 crc kubenswrapper[5104]: I0130 00:12:00.136240 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:00 crc kubenswrapper[5104]: I0130 00:12:00.136515 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:00 crc kubenswrapper[5104]: I0130 00:12:00.136528 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:00 crc kubenswrapper[5104]: I0130 00:12:00.136547 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:12:00 crc kubenswrapper[5104]: I0130 00:12:00.136560 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:12:00Z","lastTransitionTime":"2026-01-30T00:12:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:12:00 crc kubenswrapper[5104]: I0130 00:12:00.159584 5104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-bk79c" podStartSLOduration=77.159560463 podStartE2EDuration="1m17.159560463s" podCreationTimestamp="2026-01-30 00:10:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:00.159162362 +0000 UTC m=+100.891501591" watchObservedRunningTime="2026-01-30 00:12:00.159560463 +0000 UTC m=+100.891899722" Jan 30 00:12:00 crc kubenswrapper[5104]: I0130 00:12:00.225587 5104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=17.225569005 podStartE2EDuration="17.225569005s" podCreationTimestamp="2026-01-30 00:11:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:00.22316098 +0000 UTC m=+100.955500259" watchObservedRunningTime="2026-01-30 00:12:00.225569005 +0000 UTC m=+100.957908234" Jan 30 00:12:00 crc kubenswrapper[5104]: I0130 00:12:00.229496 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:00 crc kubenswrapper[5104]: E0130 00:12:00.229590 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-01-30 00:12:16.229571443 +0000 UTC m=+116.961910672 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:00 crc kubenswrapper[5104]: I0130 00:12:00.229625 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:12:00 crc kubenswrapper[5104]: E0130 00:12:00.229774 5104 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 00:12:00 crc kubenswrapper[5104]: I0130 00:12:00.229780 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:12:00 crc kubenswrapper[5104]: E0130 00:12:00.229901 5104 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 00:12:00 crc kubenswrapper[5104]: E0130 00:12:00.229918 5104 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-30 00:12:16.22986684 +0000 UTC m=+116.962206079 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 00:12:00 crc kubenswrapper[5104]: E0130 00:12:00.229920 5104 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 00:12:00 crc kubenswrapper[5104]: E0130 00:12:00.229941 5104 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:12:00 crc kubenswrapper[5104]: E0130 00:12:00.229986 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-30 00:12:16.229974633 +0000 UTC m=+116.962313862 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:12:00 crc kubenswrapper[5104]: I0130 00:12:00.230031 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:12:00 crc kubenswrapper[5104]: I0130 00:12:00.230068 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:12:00 crc kubenswrapper[5104]: E0130 00:12:00.230137 5104 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 00:12:00 crc kubenswrapper[5104]: E0130 00:12:00.230176 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-30 00:12:16.230167279 +0000 UTC m=+116.962506518 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 00:12:00 crc kubenswrapper[5104]: E0130 00:12:00.230201 5104 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 00:12:00 crc kubenswrapper[5104]: E0130 00:12:00.230219 5104 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 00:12:00 crc kubenswrapper[5104]: E0130 00:12:00.230233 5104 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:12:00 crc kubenswrapper[5104]: E0130 00:12:00.230272 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-30 00:12:16.230262342 +0000 UTC m=+116.962601571 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:12:00 crc kubenswrapper[5104]: I0130 00:12:00.238251 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:00 crc kubenswrapper[5104]: I0130 00:12:00.238289 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:00 crc kubenswrapper[5104]: I0130 00:12:00.238300 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:00 crc kubenswrapper[5104]: I0130 00:12:00.238313 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:12:00 crc kubenswrapper[5104]: I0130 00:12:00.238322 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:12:00Z","lastTransitionTime":"2026-01-30T00:12:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:12:00 crc kubenswrapper[5104]: I0130 00:12:00.321466 5104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=16.321449953 podStartE2EDuration="16.321449953s" podCreationTimestamp="2026-01-30 00:11:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:00.321189606 +0000 UTC m=+101.053528825" watchObservedRunningTime="2026-01-30 00:12:00.321449953 +0000 UTC m=+101.053789172" Jan 30 00:12:00 crc kubenswrapper[5104]: I0130 00:12:00.331447 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8549d8ab-08fd-4d10-b03e-d162d745184a-metrics-certs\") pod \"network-metrics-daemon-gvjb6\" (UID: \"8549d8ab-08fd-4d10-b03e-d162d745184a\") " pod="openshift-multus/network-metrics-daemon-gvjb6" Jan 30 00:12:00 crc kubenswrapper[5104]: E0130 00:12:00.331627 5104 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 00:12:00 crc kubenswrapper[5104]: E0130 00:12:00.331710 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8549d8ab-08fd-4d10-b03e-d162d745184a-metrics-certs podName:8549d8ab-08fd-4d10-b03e-d162d745184a nodeName:}" failed. No retries permitted until 2026-01-30 00:12:16.33169091 +0000 UTC m=+117.064030129 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8549d8ab-08fd-4d10-b03e-d162d745184a-metrics-certs") pod "network-metrics-daemon-gvjb6" (UID: "8549d8ab-08fd-4d10-b03e-d162d745184a") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 00:12:00 crc kubenswrapper[5104]: I0130 00:12:00.339993 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:00 crc kubenswrapper[5104]: I0130 00:12:00.340038 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:00 crc kubenswrapper[5104]: I0130 00:12:00.340048 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:00 crc kubenswrapper[5104]: I0130 00:12:00.340063 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:12:00 crc kubenswrapper[5104]: I0130 00:12:00.340074 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:12:00Z","lastTransitionTime":"2026-01-30T00:12:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:12:00 crc kubenswrapper[5104]: I0130 00:12:00.348540 5104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-zg4cj" podStartSLOduration=76.348526964 podStartE2EDuration="1m16.348526964s" podCreationTimestamp="2026-01-30 00:10:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:00.348159724 +0000 UTC m=+101.080498933" watchObservedRunningTime="2026-01-30 00:12:00.348526964 +0000 UTC m=+101.080866183" Jan 30 00:12:00 crc kubenswrapper[5104]: I0130 00:12:00.369796 5104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=17.369778897 podStartE2EDuration="17.369778897s" podCreationTimestamp="2026-01-30 00:11:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:00.369014197 +0000 UTC m=+101.101353416" watchObservedRunningTime="2026-01-30 00:12:00.369778897 +0000 UTC m=+101.102118116" Jan 30 00:12:00 crc kubenswrapper[5104]: I0130 00:12:00.392529 5104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-qnqx2" podStartSLOduration=77.39250632 podStartE2EDuration="1m17.39250632s" podCreationTimestamp="2026-01-30 00:10:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:00.392348326 +0000 UTC m=+101.124687545" watchObservedRunningTime="2026-01-30 00:12:00.39250632 +0000 UTC m=+101.124845559" Jan 30 00:12:00 crc kubenswrapper[5104]: I0130 00:12:00.393349 5104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-qpj6b" podStartSLOduration=77.393335964 
podStartE2EDuration="1m17.393335964s" podCreationTimestamp="2026-01-30 00:10:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:00.382034218 +0000 UTC m=+101.114373487" watchObservedRunningTime="2026-01-30 00:12:00.393335964 +0000 UTC m=+101.125675223" Jan 30 00:12:00 crc kubenswrapper[5104]: I0130 00:12:00.441600 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:00 crc kubenswrapper[5104]: I0130 00:12:00.441634 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:00 crc kubenswrapper[5104]: I0130 00:12:00.441646 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:00 crc kubenswrapper[5104]: I0130 00:12:00.441661 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:12:00 crc kubenswrapper[5104]: I0130 00:12:00.441675 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:12:00Z","lastTransitionTime":"2026-01-30T00:12:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:12:00 crc kubenswrapper[5104]: I0130 00:12:00.528331 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:12:00 crc kubenswrapper[5104]: I0130 00:12:00.528574 5104 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:12:00 crc kubenswrapper[5104]: E0130 00:12:00.528732 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 30 00:12:00 crc kubenswrapper[5104]: E0130 00:12:00.529393 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 30 00:12:00 crc kubenswrapper[5104]: I0130 00:12:00.529438 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:12:00 crc kubenswrapper[5104]: E0130 00:12:00.529515 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 30 00:12:00 crc kubenswrapper[5104]: I0130 00:12:00.529566 5104 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-gvjb6" Jan 30 00:12:00 crc kubenswrapper[5104]: E0130 00:12:00.529686 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gvjb6" podUID="8549d8ab-08fd-4d10-b03e-d162d745184a" Jan 30 00:12:00 crc kubenswrapper[5104]: I0130 00:12:00.546937 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:00 crc kubenswrapper[5104]: I0130 00:12:00.547017 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:00 crc kubenswrapper[5104]: I0130 00:12:00.547045 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:00 crc kubenswrapper[5104]: I0130 00:12:00.547078 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:12:00 crc kubenswrapper[5104]: I0130 00:12:00.547100 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:12:00Z","lastTransitionTime":"2026-01-30T00:12:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:12:00 crc kubenswrapper[5104]: I0130 00:12:00.649159 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:00 crc kubenswrapper[5104]: I0130 00:12:00.649244 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:00 crc kubenswrapper[5104]: I0130 00:12:00.649263 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:00 crc kubenswrapper[5104]: I0130 00:12:00.649289 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:12:00 crc kubenswrapper[5104]: I0130 00:12:00.649308 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:12:00Z","lastTransitionTime":"2026-01-30T00:12:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:12:00 crc kubenswrapper[5104]: I0130 00:12:00.751684 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:00 crc kubenswrapper[5104]: I0130 00:12:00.751732 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:00 crc kubenswrapper[5104]: I0130 00:12:00.751746 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:00 crc kubenswrapper[5104]: I0130 00:12:00.751763 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:12:00 crc kubenswrapper[5104]: I0130 00:12:00.751775 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:12:00Z","lastTransitionTime":"2026-01-30T00:12:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:12:00 crc kubenswrapper[5104]: I0130 00:12:00.853668 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:00 crc kubenswrapper[5104]: I0130 00:12:00.853721 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:00 crc kubenswrapper[5104]: I0130 00:12:00.853733 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:00 crc kubenswrapper[5104]: I0130 00:12:00.853755 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:12:00 crc kubenswrapper[5104]: I0130 00:12:00.853765 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:12:00Z","lastTransitionTime":"2026-01-30T00:12:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:12:00 crc kubenswrapper[5104]: I0130 00:12:00.955709 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:00 crc kubenswrapper[5104]: I0130 00:12:00.955759 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:00 crc kubenswrapper[5104]: I0130 00:12:00.955772 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:00 crc kubenswrapper[5104]: I0130 00:12:00.955789 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:12:00 crc kubenswrapper[5104]: I0130 00:12:00.955801 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:12:00Z","lastTransitionTime":"2026-01-30T00:12:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:12:00 crc kubenswrapper[5104]: I0130 00:12:00.987128 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"c56874310ae71aec7caadf78a3b5e540aa5157f71ccaf492dc501c479e9d184f"} Jan 30 00:12:00 crc kubenswrapper[5104]: I0130 00:12:00.989066 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzfxc" event={"ID":"2f49b5db-a679-4eef-9bf2-8d0275caac12","Type":"ContainerStarted","Data":"f5b028a088c03809c64529cc57108c79c73124fc91728bb2bfc48406b3351ca6"} Jan 30 00:12:00 crc kubenswrapper[5104]: I0130 00:12:00.991071 5104 generic.go:358] "Generic (PLEG): container finished" podID="4dd9b451-9f5e-4822-b340-7557a89a3ce0" containerID="d7f812057015c58485935e00c46076e7de69cc3dc4225f385b58a501c508194e" exitCode=0 Jan 30 00:12:00 crc kubenswrapper[5104]: I0130 00:12:00.991109 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dr5dp" event={"ID":"4dd9b451-9f5e-4822-b340-7557a89a3ce0","Type":"ContainerDied","Data":"d7f812057015c58485935e00c46076e7de69cc3dc4225f385b58a501c508194e"} Jan 30 00:12:00 crc kubenswrapper[5104]: I0130 00:12:00.993263 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9mfdf" event={"ID":"fc38d06d-c458-429d-8dbf-43aab1cd4e57","Type":"ContainerStarted","Data":"41e8f72d558904b648adfe1c8edd0cd6cd501ad088d1c78299d42c9efa245f16"} Jan 30 00:12:01 crc kubenswrapper[5104]: I0130 00:12:01.058783 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:01 crc kubenswrapper[5104]: I0130 00:12:01.058833 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:01 crc kubenswrapper[5104]: I0130 
00:12:01.058861 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:01 crc kubenswrapper[5104]: I0130 00:12:01.058879 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:12:01 crc kubenswrapper[5104]: I0130 00:12:01.058896 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:12:01Z","lastTransitionTime":"2026-01-30T00:12:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:12:01 crc kubenswrapper[5104]: I0130 00:12:01.161390 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:01 crc kubenswrapper[5104]: I0130 00:12:01.161568 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:01 crc kubenswrapper[5104]: I0130 00:12:01.161599 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:01 crc kubenswrapper[5104]: I0130 00:12:01.161685 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:12:01 crc kubenswrapper[5104]: I0130 00:12:01.161764 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:12:01Z","lastTransitionTime":"2026-01-30T00:12:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:12:01 crc kubenswrapper[5104]: I0130 00:12:01.264128 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:01 crc kubenswrapper[5104]: I0130 00:12:01.264206 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:01 crc kubenswrapper[5104]: I0130 00:12:01.264222 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:01 crc kubenswrapper[5104]: I0130 00:12:01.264274 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:12:01 crc kubenswrapper[5104]: I0130 00:12:01.264289 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:12:01Z","lastTransitionTime":"2026-01-30T00:12:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:12:01 crc kubenswrapper[5104]: I0130 00:12:01.366752 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:01 crc kubenswrapper[5104]: I0130 00:12:01.366801 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:01 crc kubenswrapper[5104]: I0130 00:12:01.366814 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:01 crc kubenswrapper[5104]: I0130 00:12:01.366830 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:12:01 crc kubenswrapper[5104]: I0130 00:12:01.366842 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:12:01Z","lastTransitionTime":"2026-01-30T00:12:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:12:01 crc kubenswrapper[5104]: I0130 00:12:01.468884 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:01 crc kubenswrapper[5104]: I0130 00:12:01.468963 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:01 crc kubenswrapper[5104]: I0130 00:12:01.468987 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:01 crc kubenswrapper[5104]: I0130 00:12:01.469013 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:12:01 crc kubenswrapper[5104]: I0130 00:12:01.469031 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:12:01Z","lastTransitionTime":"2026-01-30T00:12:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:12:01 crc kubenswrapper[5104]: I0130 00:12:01.571310 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:01 crc kubenswrapper[5104]: I0130 00:12:01.571355 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:01 crc kubenswrapper[5104]: I0130 00:12:01.571367 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:01 crc kubenswrapper[5104]: I0130 00:12:01.571388 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:12:01 crc kubenswrapper[5104]: I0130 00:12:01.571400 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:12:01Z","lastTransitionTime":"2026-01-30T00:12:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:12:01 crc kubenswrapper[5104]: I0130 00:12:01.673039 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:01 crc kubenswrapper[5104]: I0130 00:12:01.673089 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:01 crc kubenswrapper[5104]: I0130 00:12:01.673101 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:01 crc kubenswrapper[5104]: I0130 00:12:01.673117 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:12:01 crc kubenswrapper[5104]: I0130 00:12:01.673128 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:12:01Z","lastTransitionTime":"2026-01-30T00:12:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:12:01 crc kubenswrapper[5104]: I0130 00:12:01.774698 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:01 crc kubenswrapper[5104]: I0130 00:12:01.774741 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:01 crc kubenswrapper[5104]: I0130 00:12:01.774752 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:01 crc kubenswrapper[5104]: I0130 00:12:01.774769 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:12:01 crc kubenswrapper[5104]: I0130 00:12:01.774780 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:12:01Z","lastTransitionTime":"2026-01-30T00:12:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:12:01 crc kubenswrapper[5104]: I0130 00:12:01.878201 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:01 crc kubenswrapper[5104]: I0130 00:12:01.878282 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:01 crc kubenswrapper[5104]: I0130 00:12:01.878300 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:01 crc kubenswrapper[5104]: I0130 00:12:01.878331 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:12:01 crc kubenswrapper[5104]: I0130 00:12:01.878358 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:12:01Z","lastTransitionTime":"2026-01-30T00:12:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:12:01 crc kubenswrapper[5104]: I0130 00:12:01.980495 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:01 crc kubenswrapper[5104]: I0130 00:12:01.980546 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:01 crc kubenswrapper[5104]: I0130 00:12:01.980560 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:01 crc kubenswrapper[5104]: I0130 00:12:01.980580 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:12:01 crc kubenswrapper[5104]: I0130 00:12:01.980594 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:12:01Z","lastTransitionTime":"2026-01-30T00:12:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:12:02 crc kubenswrapper[5104]: I0130 00:12:02.000271 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzfxc" event={"ID":"2f49b5db-a679-4eef-9bf2-8d0275caac12","Type":"ContainerStarted","Data":"8ee36ac1c0ba3bf1ac12ce8b176283fd6cb74a3deaa48bf40935e9ac41aff8a2"} Jan 30 00:12:02 crc kubenswrapper[5104]: I0130 00:12:02.008657 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dr5dp" event={"ID":"4dd9b451-9f5e-4822-b340-7557a89a3ce0","Type":"ContainerStarted","Data":"a2094fbdb67d319df30745be54abcc7fd896a74987d488212e62dd2591fc4547"} Jan 30 00:12:02 crc kubenswrapper[5104]: I0130 00:12:02.008756 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dr5dp" event={"ID":"4dd9b451-9f5e-4822-b340-7557a89a3ce0","Type":"ContainerStarted","Data":"200dcc3840f55cbd23f7b942c041eac6fbc6806b7f71b56e4670f5259d00c70b"} Jan 30 00:12:02 crc kubenswrapper[5104]: I0130 00:12:02.008773 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dr5dp" event={"ID":"4dd9b451-9f5e-4822-b340-7557a89a3ce0","Type":"ContainerStarted","Data":"fcf28f348ed83bc59a65151c45ade27274d1c84f9d8088daf406794c698be505"} Jan 30 00:12:02 crc kubenswrapper[5104]: I0130 00:12:02.008787 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dr5dp" event={"ID":"4dd9b451-9f5e-4822-b340-7557a89a3ce0","Type":"ContainerStarted","Data":"7f84773ad1561f2468a7b9e564afe4da5466479c27455daffc938f275cdd8759"} Jan 30 00:12:02 crc kubenswrapper[5104]: I0130 00:12:02.009036 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dr5dp" event={"ID":"4dd9b451-9f5e-4822-b340-7557a89a3ce0","Type":"ContainerStarted","Data":"00bbd3826abc367fb2f43d9978de5b27a990cf044f166f1cfa8c32768845c5b9"} Jan 30 00:12:02 crc 
kubenswrapper[5104]: I0130 00:12:02.010432 5104 generic.go:358] "Generic (PLEG): container finished" podID="fc38d06d-c458-429d-8dbf-43aab1cd4e57" containerID="41e8f72d558904b648adfe1c8edd0cd6cd501ad088d1c78299d42c9efa245f16" exitCode=0 Jan 30 00:12:02 crc kubenswrapper[5104]: I0130 00:12:02.010494 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9mfdf" event={"ID":"fc38d06d-c458-429d-8dbf-43aab1cd4e57","Type":"ContainerDied","Data":"41e8f72d558904b648adfe1c8edd0cd6cd501ad088d1c78299d42c9efa245f16"} Jan 30 00:12:02 crc kubenswrapper[5104]: I0130 00:12:02.022013 5104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-jzfxc" podStartSLOduration=79.021997506 podStartE2EDuration="1m19.021997506s" podCreationTimestamp="2026-01-30 00:10:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:02.021911033 +0000 UTC m=+102.754250342" watchObservedRunningTime="2026-01-30 00:12:02.021997506 +0000 UTC m=+102.754336725" Jan 30 00:12:02 crc kubenswrapper[5104]: I0130 00:12:02.084002 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:02 crc kubenswrapper[5104]: I0130 00:12:02.084075 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:02 crc kubenswrapper[5104]: I0130 00:12:02.084089 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:02 crc kubenswrapper[5104]: I0130 00:12:02.084113 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:12:02 crc kubenswrapper[5104]: I0130 00:12:02.084127 5104 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:12:02Z","lastTransitionTime":"2026-01-30T00:12:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:12:02 crc kubenswrapper[5104]: I0130 00:12:02.186547 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:02 crc kubenswrapper[5104]: I0130 00:12:02.186900 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:02 crc kubenswrapper[5104]: I0130 00:12:02.186944 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:02 crc kubenswrapper[5104]: I0130 00:12:02.186970 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:12:02 crc kubenswrapper[5104]: I0130 00:12:02.186983 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:12:02Z","lastTransitionTime":"2026-01-30T00:12:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:12:02 crc kubenswrapper[5104]: I0130 00:12:02.289996 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:02 crc kubenswrapper[5104]: I0130 00:12:02.290047 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:02 crc kubenswrapper[5104]: I0130 00:12:02.290061 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:02 crc kubenswrapper[5104]: I0130 00:12:02.290079 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:12:02 crc kubenswrapper[5104]: I0130 00:12:02.290091 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:12:02Z","lastTransitionTime":"2026-01-30T00:12:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:12:02 crc kubenswrapper[5104]: I0130 00:12:02.393236 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:02 crc kubenswrapper[5104]: I0130 00:12:02.393283 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:02 crc kubenswrapper[5104]: I0130 00:12:02.393294 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:02 crc kubenswrapper[5104]: I0130 00:12:02.393310 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:12:02 crc kubenswrapper[5104]: I0130 00:12:02.393319 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:12:02Z","lastTransitionTime":"2026-01-30T00:12:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:12:02 crc kubenswrapper[5104]: I0130 00:12:02.496379 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:02 crc kubenswrapper[5104]: I0130 00:12:02.496442 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:02 crc kubenswrapper[5104]: I0130 00:12:02.496453 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:02 crc kubenswrapper[5104]: I0130 00:12:02.496472 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:12:02 crc kubenswrapper[5104]: I0130 00:12:02.496484 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:12:02Z","lastTransitionTime":"2026-01-30T00:12:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:12:02 crc kubenswrapper[5104]: I0130 00:12:02.524780 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:12:02 crc kubenswrapper[5104]: I0130 00:12:02.524780 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:12:02 crc kubenswrapper[5104]: I0130 00:12:02.524874 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:12:02 crc kubenswrapper[5104]: I0130 00:12:02.524999 5104 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-gvjb6" Jan 30 00:12:02 crc kubenswrapper[5104]: E0130 00:12:02.525005 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 30 00:12:02 crc kubenswrapper[5104]: E0130 00:12:02.525070 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 30 00:12:02 crc kubenswrapper[5104]: E0130 00:12:02.525161 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gvjb6" podUID="8549d8ab-08fd-4d10-b03e-d162d745184a" Jan 30 00:12:02 crc kubenswrapper[5104]: E0130 00:12:02.525255 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 30 00:12:02 crc kubenswrapper[5104]: I0130 00:12:02.599492 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:02 crc kubenswrapper[5104]: I0130 00:12:02.599574 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:02 crc kubenswrapper[5104]: I0130 00:12:02.599593 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:02 crc kubenswrapper[5104]: I0130 00:12:02.599620 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:12:02 crc kubenswrapper[5104]: I0130 00:12:02.599638 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:12:02Z","lastTransitionTime":"2026-01-30T00:12:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:12:02 crc kubenswrapper[5104]: I0130 00:12:02.601103 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:02 crc kubenswrapper[5104]: I0130 00:12:02.601168 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:02 crc kubenswrapper[5104]: I0130 00:12:02.601188 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:02 crc kubenswrapper[5104]: I0130 00:12:02.601566 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:12:02 crc kubenswrapper[5104]: I0130 00:12:02.601624 5104 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:12:02Z","lastTransitionTime":"2026-01-30T00:12:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:12:02 crc kubenswrapper[5104]: I0130 00:12:02.655277 5104 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-7c9b9cfd6-ks4pf"] Jan 30 00:12:02 crc kubenswrapper[5104]: I0130 00:12:02.796290 5104 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-ks4pf" Jan 30 00:12:02 crc kubenswrapper[5104]: I0130 00:12:02.799977 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\"" Jan 30 00:12:02 crc kubenswrapper[5104]: I0130 00:12:02.801085 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\"" Jan 30 00:12:02 crc kubenswrapper[5104]: I0130 00:12:02.802383 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\"" Jan 30 00:12:02 crc kubenswrapper[5104]: I0130 00:12:02.805364 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\"" Jan 30 00:12:02 crc kubenswrapper[5104]: I0130 00:12:02.960593 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/73e7a41c-768a-49cc-b215-9cec1f4e4cd8-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-ks4pf\" (UID: \"73e7a41c-768a-49cc-b215-9cec1f4e4cd8\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-ks4pf" Jan 30 00:12:02 crc kubenswrapper[5104]: I0130 00:12:02.960634 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/73e7a41c-768a-49cc-b215-9cec1f4e4cd8-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-ks4pf\" (UID: \"73e7a41c-768a-49cc-b215-9cec1f4e4cd8\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-ks4pf" Jan 30 00:12:02 crc kubenswrapper[5104]: I0130 00:12:02.960682 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: 
\"kubernetes.io/host-path/73e7a41c-768a-49cc-b215-9cec1f4e4cd8-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-ks4pf\" (UID: \"73e7a41c-768a-49cc-b215-9cec1f4e4cd8\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-ks4pf" Jan 30 00:12:02 crc kubenswrapper[5104]: I0130 00:12:02.960713 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/73e7a41c-768a-49cc-b215-9cec1f4e4cd8-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-ks4pf\" (UID: \"73e7a41c-768a-49cc-b215-9cec1f4e4cd8\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-ks4pf" Jan 30 00:12:02 crc kubenswrapper[5104]: I0130 00:12:02.960768 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/73e7a41c-768a-49cc-b215-9cec1f4e4cd8-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-ks4pf\" (UID: \"73e7a41c-768a-49cc-b215-9cec1f4e4cd8\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-ks4pf" Jan 30 00:12:03 crc kubenswrapper[5104]: I0130 00:12:03.021544 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dr5dp" event={"ID":"4dd9b451-9f5e-4822-b340-7557a89a3ce0","Type":"ContainerStarted","Data":"5f894851ec70fd49996a95dc6ef315eed6b5ab4b80619be9c78b9bba34ea1f86"} Jan 30 00:12:03 crc kubenswrapper[5104]: I0130 00:12:03.025755 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9mfdf" event={"ID":"fc38d06d-c458-429d-8dbf-43aab1cd4e57","Type":"ContainerStarted","Data":"a6f47f6e64cf46ff5cb17da8bc7a0e3535b85742629c249a04f1f9994b7d80bf"} Jan 30 00:12:03 crc kubenswrapper[5104]: I0130 00:12:03.062609 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/73e7a41c-768a-49cc-b215-9cec1f4e4cd8-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-ks4pf\" (UID: \"73e7a41c-768a-49cc-b215-9cec1f4e4cd8\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-ks4pf" Jan 30 00:12:03 crc kubenswrapper[5104]: I0130 00:12:03.062693 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/73e7a41c-768a-49cc-b215-9cec1f4e4cd8-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-ks4pf\" (UID: \"73e7a41c-768a-49cc-b215-9cec1f4e4cd8\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-ks4pf" Jan 30 00:12:03 crc kubenswrapper[5104]: I0130 00:12:03.062733 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/73e7a41c-768a-49cc-b215-9cec1f4e4cd8-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-ks4pf\" (UID: \"73e7a41c-768a-49cc-b215-9cec1f4e4cd8\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-ks4pf" Jan 30 00:12:03 crc kubenswrapper[5104]: I0130 00:12:03.062791 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/73e7a41c-768a-49cc-b215-9cec1f4e4cd8-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-ks4pf\" (UID: \"73e7a41c-768a-49cc-b215-9cec1f4e4cd8\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-ks4pf" Jan 30 00:12:03 crc kubenswrapper[5104]: I0130 00:12:03.062905 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/73e7a41c-768a-49cc-b215-9cec1f4e4cd8-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-ks4pf\" (UID: \"73e7a41c-768a-49cc-b215-9cec1f4e4cd8\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-ks4pf" Jan 30 00:12:03 crc 
kubenswrapper[5104]: I0130 00:12:03.062965 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/73e7a41c-768a-49cc-b215-9cec1f4e4cd8-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-ks4pf\" (UID: \"73e7a41c-768a-49cc-b215-9cec1f4e4cd8\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-ks4pf" Jan 30 00:12:03 crc kubenswrapper[5104]: I0130 00:12:03.063083 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/73e7a41c-768a-49cc-b215-9cec1f4e4cd8-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-ks4pf\" (UID: \"73e7a41c-768a-49cc-b215-9cec1f4e4cd8\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-ks4pf" Jan 30 00:12:03 crc kubenswrapper[5104]: I0130 00:12:03.063904 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/73e7a41c-768a-49cc-b215-9cec1f4e4cd8-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-ks4pf\" (UID: \"73e7a41c-768a-49cc-b215-9cec1f4e4cd8\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-ks4pf" Jan 30 00:12:03 crc kubenswrapper[5104]: I0130 00:12:03.073385 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/73e7a41c-768a-49cc-b215-9cec1f4e4cd8-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-ks4pf\" (UID: \"73e7a41c-768a-49cc-b215-9cec1f4e4cd8\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-ks4pf" Jan 30 00:12:03 crc kubenswrapper[5104]: I0130 00:12:03.081328 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/73e7a41c-768a-49cc-b215-9cec1f4e4cd8-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-ks4pf\" (UID: 
\"73e7a41c-768a-49cc-b215-9cec1f4e4cd8\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-ks4pf" Jan 30 00:12:03 crc kubenswrapper[5104]: I0130 00:12:03.119352 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-ks4pf" Jan 30 00:12:03 crc kubenswrapper[5104]: W0130 00:12:03.136704 5104 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod73e7a41c_768a_49cc_b215_9cec1f4e4cd8.slice/crio-10c6866e46a38318a72d97162838a598d32445516bd5d76a01834fb39e0a6a4e WatchSource:0}: Error finding container 10c6866e46a38318a72d97162838a598d32445516bd5d76a01834fb39e0a6a4e: Status 404 returned error can't find the container with id 10c6866e46a38318a72d97162838a598d32445516bd5d76a01834fb39e0a6a4e Jan 30 00:12:03 crc kubenswrapper[5104]: I0130 00:12:03.577945 5104 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kubelet-serving" Jan 30 00:12:03 crc kubenswrapper[5104]: I0130 00:12:03.586072 5104 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Jan 30 00:12:04 crc kubenswrapper[5104]: I0130 00:12:04.030547 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-ks4pf" event={"ID":"73e7a41c-768a-49cc-b215-9cec1f4e4cd8","Type":"ContainerStarted","Data":"10c6866e46a38318a72d97162838a598d32445516bd5d76a01834fb39e0a6a4e"} Jan 30 00:12:04 crc kubenswrapper[5104]: I0130 00:12:04.525079 5104 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:12:04 crc kubenswrapper[5104]: E0130 00:12:04.525629 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 30 00:12:04 crc kubenswrapper[5104]: I0130 00:12:04.525128 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:12:04 crc kubenswrapper[5104]: I0130 00:12:04.525210 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:12:04 crc kubenswrapper[5104]: E0130 00:12:04.525902 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 30 00:12:04 crc kubenswrapper[5104]: E0130 00:12:04.526084 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 30 00:12:04 crc kubenswrapper[5104]: I0130 00:12:04.526186 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gvjb6" Jan 30 00:12:04 crc kubenswrapper[5104]: E0130 00:12:04.526295 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gvjb6" podUID="8549d8ab-08fd-4d10-b03e-d162d745184a" Jan 30 00:12:05 crc kubenswrapper[5104]: I0130 00:12:05.040365 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-ks4pf" event={"ID":"73e7a41c-768a-49cc-b215-9cec1f4e4cd8","Type":"ContainerStarted","Data":"a7ed048feb027abff1ebd6d33a41a2b2bcf0f8ea52da2a08ce8db5a0c13a21e9"} Jan 30 00:12:05 crc kubenswrapper[5104]: I0130 00:12:05.046256 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dr5dp" event={"ID":"4dd9b451-9f5e-4822-b340-7557a89a3ce0","Type":"ContainerStarted","Data":"40d298f83de0c723a4a838843795e35036effc7bfbb77beb389a48bdc25f8524"} Jan 30 00:12:05 crc kubenswrapper[5104]: I0130 00:12:05.048776 5104 generic.go:358] "Generic (PLEG): container finished" podID="fc38d06d-c458-429d-8dbf-43aab1cd4e57" containerID="a6f47f6e64cf46ff5cb17da8bc7a0e3535b85742629c249a04f1f9994b7d80bf" exitCode=0 Jan 30 00:12:05 crc kubenswrapper[5104]: I0130 00:12:05.048881 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9mfdf" 
event={"ID":"fc38d06d-c458-429d-8dbf-43aab1cd4e57","Type":"ContainerDied","Data":"a6f47f6e64cf46ff5cb17da8bc7a0e3535b85742629c249a04f1f9994b7d80bf"} Jan 30 00:12:05 crc kubenswrapper[5104]: I0130 00:12:05.059094 5104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-ks4pf" podStartSLOduration=82.059058945 podStartE2EDuration="1m22.059058945s" podCreationTimestamp="2026-01-30 00:10:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:05.054505503 +0000 UTC m=+105.786844792" watchObservedRunningTime="2026-01-30 00:12:05.059058945 +0000 UTC m=+105.791398254" Jan 30 00:12:06 crc kubenswrapper[5104]: I0130 00:12:06.054826 5104 generic.go:358] "Generic (PLEG): container finished" podID="fc38d06d-c458-429d-8dbf-43aab1cd4e57" containerID="ae1819cc80ec9d5b27418af04c347101b269663697713ea668b19d1028f0dad9" exitCode=0 Jan 30 00:12:06 crc kubenswrapper[5104]: I0130 00:12:06.054903 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9mfdf" event={"ID":"fc38d06d-c458-429d-8dbf-43aab1cd4e57","Type":"ContainerDied","Data":"ae1819cc80ec9d5b27418af04c347101b269663697713ea668b19d1028f0dad9"} Jan 30 00:12:06 crc kubenswrapper[5104]: I0130 00:12:06.524811 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gvjb6" Jan 30 00:12:06 crc kubenswrapper[5104]: E0130 00:12:06.524959 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gvjb6" podUID="8549d8ab-08fd-4d10-b03e-d162d745184a" Jan 30 00:12:06 crc kubenswrapper[5104]: I0130 00:12:06.525371 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:12:06 crc kubenswrapper[5104]: E0130 00:12:06.525545 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 30 00:12:06 crc kubenswrapper[5104]: I0130 00:12:06.525630 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:12:06 crc kubenswrapper[5104]: E0130 00:12:06.525742 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 30 00:12:06 crc kubenswrapper[5104]: I0130 00:12:06.525825 5104 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:12:06 crc kubenswrapper[5104]: E0130 00:12:06.525935 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 30 00:12:07 crc kubenswrapper[5104]: I0130 00:12:07.063406 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dr5dp" event={"ID":"4dd9b451-9f5e-4822-b340-7557a89a3ce0","Type":"ContainerStarted","Data":"bbff92e295524c14ff63f4ec0ed873547af60dd65283e33a4cadf5ad7004edef"} Jan 30 00:12:07 crc kubenswrapper[5104]: I0130 00:12:07.064305 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-dr5dp" Jan 30 00:12:07 crc kubenswrapper[5104]: I0130 00:12:07.064372 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-dr5dp" Jan 30 00:12:07 crc kubenswrapper[5104]: I0130 00:12:07.064386 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-dr5dp" Jan 30 00:12:07 crc kubenswrapper[5104]: I0130 00:12:07.071578 5104 generic.go:358] "Generic (PLEG): container finished" podID="fc38d06d-c458-429d-8dbf-43aab1cd4e57" containerID="ec77c754b5c29431ca4dc06e23ccaedad2cd2fd076fd54cbe76046a44c92e0e8" exitCode=0 Jan 30 00:12:07 crc kubenswrapper[5104]: I0130 00:12:07.071627 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9mfdf" 
event={"ID":"fc38d06d-c458-429d-8dbf-43aab1cd4e57","Type":"ContainerDied","Data":"ec77c754b5c29431ca4dc06e23ccaedad2cd2fd076fd54cbe76046a44c92e0e8"} Jan 30 00:12:07 crc kubenswrapper[5104]: I0130 00:12:07.117786 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-dr5dp" Jan 30 00:12:07 crc kubenswrapper[5104]: I0130 00:12:07.119149 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-dr5dp" Jan 30 00:12:07 crc kubenswrapper[5104]: I0130 00:12:07.129870 5104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-dr5dp" podStartSLOduration=84.129834151 podStartE2EDuration="1m24.129834151s" podCreationTimestamp="2026-01-30 00:10:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:07.103390778 +0000 UTC m=+107.835730037" watchObservedRunningTime="2026-01-30 00:12:07.129834151 +0000 UTC m=+107.862173370" Jan 30 00:12:08 crc kubenswrapper[5104]: I0130 00:12:08.082772 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9mfdf" event={"ID":"fc38d06d-c458-429d-8dbf-43aab1cd4e57","Type":"ContainerStarted","Data":"a4e7a7942b4f0b0094e73ac2154fa4d34792825c263358149e41e8d55f237a91"} Jan 30 00:12:08 crc kubenswrapper[5104]: I0130 00:12:08.525034 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:12:08 crc kubenswrapper[5104]: I0130 00:12:08.525200 5104 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:12:08 crc kubenswrapper[5104]: E0130 00:12:08.525526 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 30 00:12:08 crc kubenswrapper[5104]: I0130 00:12:08.525247 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gvjb6" Jan 30 00:12:08 crc kubenswrapper[5104]: I0130 00:12:08.525200 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:12:08 crc kubenswrapper[5104]: E0130 00:12:08.525644 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 30 00:12:08 crc kubenswrapper[5104]: E0130 00:12:08.525789 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gvjb6" podUID="8549d8ab-08fd-4d10-b03e-d162d745184a" Jan 30 00:12:08 crc kubenswrapper[5104]: E0130 00:12:08.526048 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 30 00:12:09 crc kubenswrapper[5104]: I0130 00:12:09.824714 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-gvjb6"] Jan 30 00:12:09 crc kubenswrapper[5104]: I0130 00:12:09.824909 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gvjb6" Jan 30 00:12:09 crc kubenswrapper[5104]: E0130 00:12:09.825051 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gvjb6" podUID="8549d8ab-08fd-4d10-b03e-d162d745184a" Jan 30 00:12:10 crc kubenswrapper[5104]: I0130 00:12:10.526395 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:12:10 crc kubenswrapper[5104]: E0130 00:12:10.526490 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 30 00:12:10 crc kubenswrapper[5104]: I0130 00:12:10.526791 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:12:10 crc kubenswrapper[5104]: E0130 00:12:10.526863 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 30 00:12:10 crc kubenswrapper[5104]: I0130 00:12:10.526893 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:12:10 crc kubenswrapper[5104]: E0130 00:12:10.526942 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 30 00:12:11 crc kubenswrapper[5104]: I0130 00:12:11.000056 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:12:11 crc kubenswrapper[5104]: I0130 00:12:11.095944 5104 generic.go:358] "Generic (PLEG): container finished" podID="fc38d06d-c458-429d-8dbf-43aab1cd4e57" containerID="a4e7a7942b4f0b0094e73ac2154fa4d34792825c263358149e41e8d55f237a91" exitCode=0 Jan 30 00:12:11 crc kubenswrapper[5104]: I0130 00:12:11.095991 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9mfdf" event={"ID":"fc38d06d-c458-429d-8dbf-43aab1cd4e57","Type":"ContainerDied","Data":"a4e7a7942b4f0b0094e73ac2154fa4d34792825c263358149e41e8d55f237a91"} Jan 30 00:12:11 crc kubenswrapper[5104]: I0130 00:12:11.525433 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gvjb6" Jan 30 00:12:11 crc kubenswrapper[5104]: E0130 00:12:11.525666 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gvjb6" podUID="8549d8ab-08fd-4d10-b03e-d162d745184a"
Jan 30 00:12:12 crc kubenswrapper[5104]: I0130 00:12:12.102748 5104 generic.go:358] "Generic (PLEG): container finished" podID="fc38d06d-c458-429d-8dbf-43aab1cd4e57" containerID="9fc934d4e7d83dd06e7d83f8b4d228ef96eb1cb5867ba50f257dd5acba0a49da" exitCode=0
Jan 30 00:12:12 crc kubenswrapper[5104]: I0130 00:12:12.102833 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9mfdf" event={"ID":"fc38d06d-c458-429d-8dbf-43aab1cd4e57","Type":"ContainerDied","Data":"9fc934d4e7d83dd06e7d83f8b4d228ef96eb1cb5867ba50f257dd5acba0a49da"}
Jan 30 00:12:12 crc kubenswrapper[5104]: I0130 00:12:12.525576 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 30 00:12:12 crc kubenswrapper[5104]: E0130 00:12:12.525716 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Jan 30 00:12:12 crc kubenswrapper[5104]: I0130 00:12:12.525986 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 30 00:12:12 crc kubenswrapper[5104]: E0130 00:12:12.526091 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Jan 30 00:12:12 crc kubenswrapper[5104]: I0130 00:12:12.526144 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 30 00:12:12 crc kubenswrapper[5104]: E0130 00:12:12.526306 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Jan 30 00:12:13 crc kubenswrapper[5104]: I0130 00:12:13.110632 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9mfdf" event={"ID":"fc38d06d-c458-429d-8dbf-43aab1cd4e57","Type":"ContainerStarted","Data":"dbc41532c19623b1583429fabaea4a5391dae0b7150d295fe0f0d02b0c7a41e5"}
Jan 30 00:12:13 crc kubenswrapper[5104]: I0130 00:12:13.133004 5104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-9mfdf" podStartSLOduration=90.132978875 podStartE2EDuration="1m30.132978875s" podCreationTimestamp="2026-01-30 00:10:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:13.132305307 +0000 UTC m=+113.864644596" watchObservedRunningTime="2026-01-30 00:12:13.132978875 +0000 UTC m=+113.865318104"
Jan 30 00:12:13 crc kubenswrapper[5104]: I0130 00:12:13.524727 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gvjb6"
Jan 30 00:12:13 crc kubenswrapper[5104]: E0130 00:12:13.524955 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gvjb6" podUID="8549d8ab-08fd-4d10-b03e-d162d745184a"
Jan 30 00:12:14 crc kubenswrapper[5104]: I0130 00:12:14.528845 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 30 00:12:14 crc kubenswrapper[5104]: E0130 00:12:14.528989 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Jan 30 00:12:14 crc kubenswrapper[5104]: I0130 00:12:14.529025 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 30 00:12:14 crc kubenswrapper[5104]: E0130 00:12:14.529207 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Jan 30 00:12:14 crc kubenswrapper[5104]: I0130 00:12:14.529641 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 30 00:12:14 crc kubenswrapper[5104]: E0130 00:12:14.529765 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.056221 5104 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeReady"
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.056696 5104 kubelet_node_status.go:550] "Fast updating node status as it just became ready"
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.103497 5104 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-mh68h"]
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.167309 5104 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-d9wqk"]
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.200138 5104 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-l7gdh"]
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.200267 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-mh68h"
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.206804 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\""
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.211677 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\""
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.212004 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\""
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.212085 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\""
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.212412 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\""
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.213512 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\""
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.227388 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwhdf\" (UniqueName: \"kubernetes.io/projected/ee7fb654-66c4-4c3d-88ff-e2f8f8b78f88-kube-api-access-xwhdf\") pod \"machine-api-operator-755bb95488-mh68h\" (UID: \"ee7fb654-66c4-4c3d-88ff-e2f8f8b78f88\") " pod="openshift-machine-api/machine-api-operator-755bb95488-mh68h"
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.227484 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ee7fb654-66c4-4c3d-88ff-e2f8f8b78f88-images\") pod \"machine-api-operator-755bb95488-mh68h\" (UID: \"ee7fb654-66c4-4c3d-88ff-e2f8f8b78f88\") " pod="openshift-machine-api/machine-api-operator-755bb95488-mh68h"
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.227543 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee7fb654-66c4-4c3d-88ff-e2f8f8b78f88-config\") pod \"machine-api-operator-755bb95488-mh68h\" (UID: \"ee7fb654-66c4-4c3d-88ff-e2f8f8b78f88\") " pod="openshift-machine-api/machine-api-operator-755bb95488-mh68h"
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.227566 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/ee7fb654-66c4-4c3d-88ff-e2f8f8b78f88-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-mh68h\" (UID: \"ee7fb654-66c4-4c3d-88ff-e2f8f8b78f88\") " pod="openshift-machine-api/machine-api-operator-755bb95488-mh68h"
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.330719 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bef0b46-9def-441e-88e8-f481e45026da-serving-cert\") pod \"apiserver-9ddfb9f55-d9wqk\" (UID: \"1bef0b46-9def-441e-88e8-f481e45026da\") " pod="openshift-apiserver/apiserver-9ddfb9f55-d9wqk"
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.330752 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bef0b46-9def-441e-88e8-f481e45026da-etcd-client\") pod \"apiserver-9ddfb9f55-d9wqk\" (UID: \"1bef0b46-9def-441e-88e8-f481e45026da\") " pod="openshift-apiserver/apiserver-9ddfb9f55-d9wqk"
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.330866 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee7fb654-66c4-4c3d-88ff-e2f8f8b78f88-config\") pod \"machine-api-operator-755bb95488-mh68h\" (UID: \"ee7fb654-66c4-4c3d-88ff-e2f8f8b78f88\") " pod="openshift-machine-api/machine-api-operator-755bb95488-mh68h"
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.330945 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bef0b46-9def-441e-88e8-f481e45026da-encryption-config\") pod \"apiserver-9ddfb9f55-d9wqk\" (UID: \"1bef0b46-9def-441e-88e8-f481e45026da\") " pod="openshift-apiserver/apiserver-9ddfb9f55-d9wqk"
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.330983 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bef0b46-9def-441e-88e8-f481e45026da-config\") pod \"apiserver-9ddfb9f55-d9wqk\" (UID: \"1bef0b46-9def-441e-88e8-f481e45026da\") " pod="openshift-apiserver/apiserver-9ddfb9f55-d9wqk"
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.331004 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bef0b46-9def-441e-88e8-f481e45026da-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-d9wqk\" (UID: \"1bef0b46-9def-441e-88e8-f481e45026da\") " pod="openshift-apiserver/apiserver-9ddfb9f55-d9wqk"
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.331026 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1bef0b46-9def-441e-88e8-f481e45026da-audit-dir\") pod \"apiserver-9ddfb9f55-d9wqk\" (UID: \"1bef0b46-9def-441e-88e8-f481e45026da\") " pod="openshift-apiserver/apiserver-9ddfb9f55-d9wqk"
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.331042 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bef0b46-9def-441e-88e8-f481e45026da-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-d9wqk\" (UID: \"1bef0b46-9def-441e-88e8-f481e45026da\") " pod="openshift-apiserver/apiserver-9ddfb9f55-d9wqk"
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.331061 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1bef0b46-9def-441e-88e8-f481e45026da-node-pullsecrets\") pod \"apiserver-9ddfb9f55-d9wqk\" (UID: \"1bef0b46-9def-441e-88e8-f481e45026da\") " pod="openshift-apiserver/apiserver-9ddfb9f55-d9wqk"
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.331075 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dbv9\" (UniqueName: \"kubernetes.io/projected/1bef0b46-9def-441e-88e8-f481e45026da-kube-api-access-2dbv9\") pod \"apiserver-9ddfb9f55-d9wqk\" (UID: \"1bef0b46-9def-441e-88e8-f481e45026da\") " pod="openshift-apiserver/apiserver-9ddfb9f55-d9wqk"
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.331094 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/ee7fb654-66c4-4c3d-88ff-e2f8f8b78f88-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-mh68h\" (UID: \"ee7fb654-66c4-4c3d-88ff-e2f8f8b78f88\") " pod="openshift-machine-api/machine-api-operator-755bb95488-mh68h"
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.331109 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bef0b46-9def-441e-88e8-f481e45026da-audit\") pod \"apiserver-9ddfb9f55-d9wqk\" (UID: \"1bef0b46-9def-441e-88e8-f481e45026da\") " pod="openshift-apiserver/apiserver-9ddfb9f55-d9wqk"
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.331128 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bef0b46-9def-441e-88e8-f481e45026da-image-import-ca\") pod \"apiserver-9ddfb9f55-d9wqk\" (UID: \"1bef0b46-9def-441e-88e8-f481e45026da\") " pod="openshift-apiserver/apiserver-9ddfb9f55-d9wqk"
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.331171 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ee7fb654-66c4-4c3d-88ff-e2f8f8b78f88-images\") pod \"machine-api-operator-755bb95488-mh68h\" (UID: \"ee7fb654-66c4-4c3d-88ff-e2f8f8b78f88\") " pod="openshift-machine-api/machine-api-operator-755bb95488-mh68h"
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.331359 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xwhdf\" (UniqueName: \"kubernetes.io/projected/ee7fb654-66c4-4c3d-88ff-e2f8f8b78f88-kube-api-access-xwhdf\") pod \"machine-api-operator-755bb95488-mh68h\" (UID: \"ee7fb654-66c4-4c3d-88ff-e2f8f8b78f88\") " pod="openshift-machine-api/machine-api-operator-755bb95488-mh68h"
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.331625 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee7fb654-66c4-4c3d-88ff-e2f8f8b78f88-config\") pod \"machine-api-operator-755bb95488-mh68h\" (UID: \"ee7fb654-66c4-4c3d-88ff-e2f8f8b78f88\") " pod="openshift-machine-api/machine-api-operator-755bb95488-mh68h"
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.332041 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ee7fb654-66c4-4c3d-88ff-e2f8f8b78f88-images\") pod \"machine-api-operator-755bb95488-mh68h\" (UID: \"ee7fb654-66c4-4c3d-88ff-e2f8f8b78f88\") " pod="openshift-machine-api/machine-api-operator-755bb95488-mh68h"
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.346556 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/ee7fb654-66c4-4c3d-88ff-e2f8f8b78f88-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-mh68h\" (UID: \"ee7fb654-66c4-4c3d-88ff-e2f8f8b78f88\") " pod="openshift-machine-api/machine-api-operator-755bb95488-mh68h"
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.351002 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwhdf\" (UniqueName: \"kubernetes.io/projected/ee7fb654-66c4-4c3d-88ff-e2f8f8b78f88-kube-api-access-xwhdf\") pod \"machine-api-operator-755bb95488-mh68h\" (UID: \"ee7fb654-66c4-4c3d-88ff-e2f8f8b78f88\") " pod="openshift-machine-api/machine-api-operator-755bb95488-mh68h"
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.372210 5104 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-pruner-29495520-4kxpc"]
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.372406 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-l7gdh"
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.372361 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-d9wqk"
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.379700 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\""
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.379934 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\""
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.380228 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\""
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.380422 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\""
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.380557 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\""
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.380686 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\""
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.380882 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\""
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.384167 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\""
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.384447 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\""
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.384632 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\""
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.384902 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\""
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.385097 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\""
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.385674 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\""
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.385753 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\""
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.386540 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\""
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.395837 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\""
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.406636 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\""
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.413650 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\""
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.432683 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bef0b46-9def-441e-88e8-f481e45026da-config\") pod \"apiserver-9ddfb9f55-d9wqk\" (UID: \"1bef0b46-9def-441e-88e8-f481e45026da\") " pod="openshift-apiserver/apiserver-9ddfb9f55-d9wqk"
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.432732 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bef0b46-9def-441e-88e8-f481e45026da-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-d9wqk\" (UID: \"1bef0b46-9def-441e-88e8-f481e45026da\") " pod="openshift-apiserver/apiserver-9ddfb9f55-d9wqk"
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.432759 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1bef0b46-9def-441e-88e8-f481e45026da-audit-dir\") pod \"apiserver-9ddfb9f55-d9wqk\" (UID: \"1bef0b46-9def-441e-88e8-f481e45026da\") " pod="openshift-apiserver/apiserver-9ddfb9f55-d9wqk"
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.432779 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bef0b46-9def-441e-88e8-f481e45026da-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-d9wqk\" (UID: \"1bef0b46-9def-441e-88e8-f481e45026da\") " pod="openshift-apiserver/apiserver-9ddfb9f55-d9wqk"
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.432800 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1bef0b46-9def-441e-88e8-f481e45026da-node-pullsecrets\") pod \"apiserver-9ddfb9f55-d9wqk\" (UID: \"1bef0b46-9def-441e-88e8-f481e45026da\") " pod="openshift-apiserver/apiserver-9ddfb9f55-d9wqk"
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.432819 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2dbv9\" (UniqueName: \"kubernetes.io/projected/1bef0b46-9def-441e-88e8-f481e45026da-kube-api-access-2dbv9\") pod \"apiserver-9ddfb9f55-d9wqk\" (UID: \"1bef0b46-9def-441e-88e8-f481e45026da\") " pod="openshift-apiserver/apiserver-9ddfb9f55-d9wqk"
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.432840 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bef0b46-9def-441e-88e8-f481e45026da-audit\") pod \"apiserver-9ddfb9f55-d9wqk\" (UID: \"1bef0b46-9def-441e-88e8-f481e45026da\") " pod="openshift-apiserver/apiserver-9ddfb9f55-d9wqk"
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.432886 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bef0b46-9def-441e-88e8-f481e45026da-image-import-ca\") pod \"apiserver-9ddfb9f55-d9wqk\" (UID: \"1bef0b46-9def-441e-88e8-f481e45026da\") " pod="openshift-apiserver/apiserver-9ddfb9f55-d9wqk"
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.432913 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df0257f9-bd1a-4915-8db4-aec4ffda4826-config\") pod \"controller-manager-65b6cccf98-l7gdh\" (UID: \"df0257f9-bd1a-4915-8db4-aec4ffda4826\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-l7gdh"
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.432955 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df0257f9-bd1a-4915-8db4-aec4ffda4826-serving-cert\") pod \"controller-manager-65b6cccf98-l7gdh\" (UID: \"df0257f9-bd1a-4915-8db4-aec4ffda4826\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-l7gdh"
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.432977 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ms24b\" (UniqueName: \"kubernetes.io/projected/df0257f9-bd1a-4915-8db4-aec4ffda4826-kube-api-access-ms24b\") pod \"controller-manager-65b6cccf98-l7gdh\" (UID: \"df0257f9-bd1a-4915-8db4-aec4ffda4826\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-l7gdh"
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.433114 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1bef0b46-9def-441e-88e8-f481e45026da-node-pullsecrets\") pod \"apiserver-9ddfb9f55-d9wqk\" (UID: \"1bef0b46-9def-441e-88e8-f481e45026da\") " pod="openshift-apiserver/apiserver-9ddfb9f55-d9wqk"
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.433189 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/df0257f9-bd1a-4915-8db4-aec4ffda4826-tmp\") pod \"controller-manager-65b6cccf98-l7gdh\" (UID: \"df0257f9-bd1a-4915-8db4-aec4ffda4826\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-l7gdh"
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.433264 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bef0b46-9def-441e-88e8-f481e45026da-serving-cert\") pod \"apiserver-9ddfb9f55-d9wqk\" (UID: \"1bef0b46-9def-441e-88e8-f481e45026da\") " pod="openshift-apiserver/apiserver-9ddfb9f55-d9wqk"
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.433286 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bef0b46-9def-441e-88e8-f481e45026da-etcd-client\") pod \"apiserver-9ddfb9f55-d9wqk\" (UID: \"1bef0b46-9def-441e-88e8-f481e45026da\") " pod="openshift-apiserver/apiserver-9ddfb9f55-d9wqk"
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.433324 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/df0257f9-bd1a-4915-8db4-aec4ffda4826-client-ca\") pod \"controller-manager-65b6cccf98-l7gdh\" (UID: \"df0257f9-bd1a-4915-8db4-aec4ffda4826\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-l7gdh"
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.433353 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/df0257f9-bd1a-4915-8db4-aec4ffda4826-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-l7gdh\" (UID: \"df0257f9-bd1a-4915-8db4-aec4ffda4826\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-l7gdh"
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.433403 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bef0b46-9def-441e-88e8-f481e45026da-encryption-config\") pod \"apiserver-9ddfb9f55-d9wqk\" (UID: \"1bef0b46-9def-441e-88e8-f481e45026da\") " pod="openshift-apiserver/apiserver-9ddfb9f55-d9wqk"
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.434095 5104 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-v56dx"]
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.434831 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bef0b46-9def-441e-88e8-f481e45026da-image-import-ca\") pod \"apiserver-9ddfb9f55-d9wqk\" (UID: \"1bef0b46-9def-441e-88e8-f481e45026da\") " pod="openshift-apiserver/apiserver-9ddfb9f55-d9wqk"
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.435387 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bef0b46-9def-441e-88e8-f481e45026da-config\") pod \"apiserver-9ddfb9f55-d9wqk\" (UID: \"1bef0b46-9def-441e-88e8-f481e45026da\") " pod="openshift-apiserver/apiserver-9ddfb9f55-d9wqk"
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.435789 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1bef0b46-9def-441e-88e8-f481e45026da-audit-dir\") pod \"apiserver-9ddfb9f55-d9wqk\" (UID: \"1bef0b46-9def-441e-88e8-f481e45026da\") " pod="openshift-apiserver/apiserver-9ddfb9f55-d9wqk"
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.435826 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bef0b46-9def-441e-88e8-f481e45026da-audit\") pod \"apiserver-9ddfb9f55-d9wqk\" (UID: \"1bef0b46-9def-441e-88e8-f481e45026da\") " pod="openshift-apiserver/apiserver-9ddfb9f55-d9wqk"
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.436304 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bef0b46-9def-441e-88e8-f481e45026da-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-d9wqk\" (UID: \"1bef0b46-9def-441e-88e8-f481e45026da\") " pod="openshift-apiserver/apiserver-9ddfb9f55-d9wqk"
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.436550 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bef0b46-9def-441e-88e8-f481e45026da-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-d9wqk\" (UID: \"1bef0b46-9def-441e-88e8-f481e45026da\") " pod="openshift-apiserver/apiserver-9ddfb9f55-d9wqk"
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.437374 5104 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-d6sw5"]
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.437448 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29495520-4kxpc"
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.439617 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"pruner-dockercfg-rs58m\""
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.439679 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"serviceca\""
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.440012 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bef0b46-9def-441e-88e8-f481e45026da-encryption-config\") pod \"apiserver-9ddfb9f55-d9wqk\" (UID: \"1bef0b46-9def-441e-88e8-f481e45026da\") " pod="openshift-apiserver/apiserver-9ddfb9f55-d9wqk"
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.441940 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bef0b46-9def-441e-88e8-f481e45026da-etcd-client\") pod \"apiserver-9ddfb9f55-d9wqk\" (UID: \"1bef0b46-9def-441e-88e8-f481e45026da\") " pod="openshift-apiserver/apiserver-9ddfb9f55-d9wqk"
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.444543 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bef0b46-9def-441e-88e8-f481e45026da-serving-cert\") pod \"apiserver-9ddfb9f55-d9wqk\" (UID: \"1bef0b46-9def-441e-88e8-f481e45026da\") " pod="openshift-apiserver/apiserver-9ddfb9f55-d9wqk"
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.453324 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dbv9\" (UniqueName: \"kubernetes.io/projected/1bef0b46-9def-441e-88e8-f481e45026da-kube-api-access-2dbv9\") pod \"apiserver-9ddfb9f55-d9wqk\" (UID: \"1bef0b46-9def-441e-88e8-f481e45026da\") " pod="openshift-apiserver/apiserver-9ddfb9f55-d9wqk"
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.504418 5104 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-xkl2m"]
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.504594 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-d6sw5"
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.505180 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-v56dx"
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.507177 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\""
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.507353 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\""
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.508102 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\""
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.508151 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\""
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.508245 5104 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-g766x"]
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.508274 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\""
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.508275 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\""
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.508353 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\""
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.508396 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-xkl2m"
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.508502 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\""
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.508831 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\""
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.511273 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\""
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.511329 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\""
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.512383 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\""
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.512496 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\""
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.526902 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-mh68h"
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.534736 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scpqh\" (UniqueName: \"kubernetes.io/projected/512ba09a-c537-4c10-86c4-6226498ce0e0-kube-api-access-scpqh\") pod \"openshift-config-operator-5777786469-v56dx\" (UID: \"512ba09a-c537-4c10-86c4-6226498ce0e0\") " pod="openshift-config-operator/openshift-config-operator-5777786469-v56dx"
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.534799 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/512ba09a-c537-4c10-86c4-6226498ce0e0-serving-cert\") pod \"openshift-config-operator-5777786469-v56dx\" (UID: \"512ba09a-c537-4c10-86c4-6226498ce0e0\") " pod="openshift-config-operator/openshift-config-operator-5777786469-v56dx"
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.534871 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df0257f9-bd1a-4915-8db4-aec4ffda4826-config\") pod \"controller-manager-65b6cccf98-l7gdh\" (UID: \"df0257f9-bd1a-4915-8db4-aec4ffda4826\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-l7gdh"
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.535377 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/302f79c1-a693-494c-9a1b-360a59d439f5-serviceca\") pod \"image-pruner-29495520-4kxpc\" (UID: \"302f79c1-a693-494c-9a1b-360a59d439f5\") " pod="openshift-image-registry/image-pruner-29495520-4kxpc"
Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.535473 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName:
\"kubernetes.io/secret/df0257f9-bd1a-4915-8db4-aec4ffda4826-serving-cert\") pod \"controller-manager-65b6cccf98-l7gdh\" (UID: \"df0257f9-bd1a-4915-8db4-aec4ffda4826\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-l7gdh" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.536074 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ms24b\" (UniqueName: \"kubernetes.io/projected/df0257f9-bd1a-4915-8db4-aec4ffda4826-kube-api-access-ms24b\") pod \"controller-manager-65b6cccf98-l7gdh\" (UID: \"df0257f9-bd1a-4915-8db4-aec4ffda4826\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-l7gdh" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.536161 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/df0257f9-bd1a-4915-8db4-aec4ffda4826-tmp\") pod \"controller-manager-65b6cccf98-l7gdh\" (UID: \"df0257f9-bd1a-4915-8db4-aec4ffda4826\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-l7gdh" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.536239 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/df0257f9-bd1a-4915-8db4-aec4ffda4826-client-ca\") pod \"controller-manager-65b6cccf98-l7gdh\" (UID: \"df0257f9-bd1a-4915-8db4-aec4ffda4826\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-l7gdh" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.536261 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/df0257f9-bd1a-4915-8db4-aec4ffda4826-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-l7gdh\" (UID: \"df0257f9-bd1a-4915-8db4-aec4ffda4826\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-l7gdh" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.536320 5104 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mglfk\" (UniqueName: \"kubernetes.io/projected/302f79c1-a693-494c-9a1b-360a59d439f5-kube-api-access-mglfk\") pod \"image-pruner-29495520-4kxpc\" (UID: \"302f79c1-a693-494c-9a1b-360a59d439f5\") " pod="openshift-image-registry/image-pruner-29495520-4kxpc" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.536733 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/df0257f9-bd1a-4915-8db4-aec4ffda4826-tmp\") pod \"controller-manager-65b6cccf98-l7gdh\" (UID: \"df0257f9-bd1a-4915-8db4-aec4ffda4826\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-l7gdh" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.537042 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df0257f9-bd1a-4915-8db4-aec4ffda4826-config\") pod \"controller-manager-65b6cccf98-l7gdh\" (UID: \"df0257f9-bd1a-4915-8db4-aec4ffda4826\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-l7gdh" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.537376 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/df0257f9-bd1a-4915-8db4-aec4ffda4826-client-ca\") pod \"controller-manager-65b6cccf98-l7gdh\" (UID: \"df0257f9-bd1a-4915-8db4-aec4ffda4826\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-l7gdh" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.536375 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/512ba09a-c537-4c10-86c4-6226498ce0e0-available-featuregates\") pod \"openshift-config-operator-5777786469-v56dx\" (UID: \"512ba09a-c537-4c10-86c4-6226498ce0e0\") " 
pod="openshift-config-operator/openshift-config-operator-5777786469-v56dx" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.539124 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/df0257f9-bd1a-4915-8db4-aec4ffda4826-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-l7gdh\" (UID: \"df0257f9-bd1a-4915-8db4-aec4ffda4826\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-l7gdh" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.541138 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df0257f9-bd1a-4915-8db4-aec4ffda4826-serving-cert\") pod \"controller-manager-65b6cccf98-l7gdh\" (UID: \"df0257f9-bd1a-4915-8db4-aec4ffda4826\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-l7gdh" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.554531 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ms24b\" (UniqueName: \"kubernetes.io/projected/df0257f9-bd1a-4915-8db4-aec4ffda4826-kube-api-access-ms24b\") pod \"controller-manager-65b6cccf98-l7gdh\" (UID: \"df0257f9-bd1a-4915-8db4-aec4ffda4826\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-l7gdh" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.580630 5104 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-68cf44c8b8-xs5zv"] Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.581057 5104 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-g766x" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.583809 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\"" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.584012 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\"" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.584117 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\"" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.584442 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\"" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.584601 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\"" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.584627 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\"" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.584740 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\"" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.584800 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\"" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.584942 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\"" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.585250 5104 
reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\"" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.585427 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\"" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.586705 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\"" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.592013 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\"" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.597251 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\"" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.605822 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\"" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.639038 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/5c4af38b-fd2a-49b5-be40-cbd25eba4bde-tmp-dir\") pod \"dns-operator-799b87ffcd-xkl2m\" (UID: \"5c4af38b-fd2a-49b5-be40-cbd25eba4bde\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-xkl2m" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.639102 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/211c8215-e1c1-4bb9-881e-d2570dead87e-config\") pod \"openshift-apiserver-operator-846cbfc458-d6sw5\" (UID: \"211c8215-e1c1-4bb9-881e-d2570dead87e\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-d6sw5" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.639158 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/211c8215-e1c1-4bb9-881e-d2570dead87e-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-d6sw5\" (UID: \"211c8215-e1c1-4bb9-881e-d2570dead87e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-d6sw5" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.639324 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mglfk\" (UniqueName: \"kubernetes.io/projected/302f79c1-a693-494c-9a1b-360a59d439f5-kube-api-access-mglfk\") pod \"image-pruner-29495520-4kxpc\" (UID: \"302f79c1-a693-494c-9a1b-360a59d439f5\") " pod="openshift-image-registry/image-pruner-29495520-4kxpc" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.639394 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kql4\" (UniqueName: \"kubernetes.io/projected/5c4af38b-fd2a-49b5-be40-cbd25eba4bde-kube-api-access-8kql4\") pod \"dns-operator-799b87ffcd-xkl2m\" (UID: \"5c4af38b-fd2a-49b5-be40-cbd25eba4bde\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-xkl2m" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.639425 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/512ba09a-c537-4c10-86c4-6226498ce0e0-available-featuregates\") pod \"openshift-config-operator-5777786469-v56dx\" (UID: \"512ba09a-c537-4c10-86c4-6226498ce0e0\") " pod="openshift-config-operator/openshift-config-operator-5777786469-v56dx" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.639451 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"kube-api-access-scpqh\" (UniqueName: \"kubernetes.io/projected/512ba09a-c537-4c10-86c4-6226498ce0e0-kube-api-access-scpqh\") pod \"openshift-config-operator-5777786469-v56dx\" (UID: \"512ba09a-c537-4c10-86c4-6226498ce0e0\") " pod="openshift-config-operator/openshift-config-operator-5777786469-v56dx" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.639473 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhdw8\" (UniqueName: \"kubernetes.io/projected/211c8215-e1c1-4bb9-881e-d2570dead87e-kube-api-access-dhdw8\") pod \"openshift-apiserver-operator-846cbfc458-d6sw5\" (UID: \"211c8215-e1c1-4bb9-881e-d2570dead87e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-d6sw5" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.639495 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/512ba09a-c537-4c10-86c4-6226498ce0e0-serving-cert\") pod \"openshift-config-operator-5777786469-v56dx\" (UID: \"512ba09a-c537-4c10-86c4-6226498ce0e0\") " pod="openshift-config-operator/openshift-config-operator-5777786469-v56dx" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.639544 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5c4af38b-fd2a-49b5-be40-cbd25eba4bde-metrics-tls\") pod \"dns-operator-799b87ffcd-xkl2m\" (UID: \"5c4af38b-fd2a-49b5-be40-cbd25eba4bde\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-xkl2m" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.639574 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/302f79c1-a693-494c-9a1b-360a59d439f5-serviceca\") pod \"image-pruner-29495520-4kxpc\" (UID: \"302f79c1-a693-494c-9a1b-360a59d439f5\") " 
pod="openshift-image-registry/image-pruner-29495520-4kxpc" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.640310 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/512ba09a-c537-4c10-86c4-6226498ce0e0-available-featuregates\") pod \"openshift-config-operator-5777786469-v56dx\" (UID: \"512ba09a-c537-4c10-86c4-6226498ce0e0\") " pod="openshift-config-operator/openshift-config-operator-5777786469-v56dx" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.640346 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/302f79c1-a693-494c-9a1b-360a59d439f5-serviceca\") pod \"image-pruner-29495520-4kxpc\" (UID: \"302f79c1-a693-494c-9a1b-360a59d439f5\") " pod="openshift-image-registry/image-pruner-29495520-4kxpc" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.646249 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/512ba09a-c537-4c10-86c4-6226498ce0e0-serving-cert\") pod \"openshift-config-operator-5777786469-v56dx\" (UID: \"512ba09a-c537-4c10-86c4-6226498ce0e0\") " pod="openshift-config-operator/openshift-config-operator-5777786469-v56dx" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.647537 5104 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-lhbqs"] Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.657469 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-scpqh\" (UniqueName: \"kubernetes.io/projected/512ba09a-c537-4c10-86c4-6226498ce0e0-kube-api-access-scpqh\") pod \"openshift-config-operator-5777786469-v56dx\" (UID: \"512ba09a-c537-4c10-86c4-6226498ce0e0\") " pod="openshift-config-operator/openshift-config-operator-5777786469-v56dx" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.663979 5104 
kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-c5tsr"] Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.664053 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-xs5zv" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.664087 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.664800 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.669226 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\"" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.669225 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mglfk\" (UniqueName: \"kubernetes.io/projected/302f79c1-a693-494c-9a1b-360a59d439f5-kube-api-access-mglfk\") pod \"image-pruner-29495520-4kxpc\" (UID: \"302f79c1-a693-494c-9a1b-360a59d439f5\") " pod="openshift-image-registry/image-pruner-29495520-4kxpc" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.669291 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\"" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.669471 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.669636 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\"" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.669694 5104 
reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\"" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.669704 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\"" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.670274 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\"" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.670319 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\"" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.670380 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\"" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.670388 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.672054 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.672207 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\"" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.682261 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\"" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.692456 5104 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-l7gdh" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.700309 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-d9wqk" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.708653 5104 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-jd2mk"] Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.710331 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gvjb6" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.712530 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\"" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.714061 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.741108 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dhdw8\" (UniqueName: \"kubernetes.io/projected/211c8215-e1c1-4bb9-881e-d2570dead87e-kube-api-access-dhdw8\") pod \"openshift-apiserver-operator-846cbfc458-d6sw5\" (UID: \"211c8215-e1c1-4bb9-881e-d2570dead87e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-d6sw5" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.741149 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/40d2656d-a61b-4aaa-8860-225ca88ac6a7-installation-pull-secrets\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:15 crc 
kubenswrapper[5104]: I0130 00:12:15.741169 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/5b96d7cb-4106-4adb-baab-92ec201306e2-default-certificate\") pod \"router-default-68cf44c8b8-xs5zv\" (UID: \"5b96d7cb-4106-4adb-baab-92ec201306e2\") " pod="openshift-ingress/router-default-68cf44c8b8-xs5zv" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.741199 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5c4af38b-fd2a-49b5-be40-cbd25eba4bde-metrics-tls\") pod \"dns-operator-799b87ffcd-xkl2m\" (UID: \"5c4af38b-fd2a-49b5-be40-cbd25eba4bde\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-xkl2m" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.741217 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c47b4509-0bb1-4360-9db3-29ebfcd734e3-audit-policies\") pod \"oauth-openshift-66458b6674-g766x\" (UID: \"c47b4509-0bb1-4360-9db3-29ebfcd734e3\") " pod="openshift-authentication/oauth-openshift-66458b6674-g766x" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.741233 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c47b4509-0bb1-4360-9db3-29ebfcd734e3-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-g766x\" (UID: \"c47b4509-0bb1-4360-9db3-29ebfcd734e3\") " pod="openshift-authentication/oauth-openshift-66458b6674-g766x" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.741264 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c47b4509-0bb1-4360-9db3-29ebfcd734e3-v4-0-config-system-session\") pod 
\"oauth-openshift-66458b6674-g766x\" (UID: \"c47b4509-0bb1-4360-9db3-29ebfcd734e3\") " pod="openshift-authentication/oauth-openshift-66458b6674-g766x" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.741278 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c47b4509-0bb1-4360-9db3-29ebfcd734e3-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-g766x\" (UID: \"c47b4509-0bb1-4360-9db3-29ebfcd734e3\") " pod="openshift-authentication/oauth-openshift-66458b6674-g766x" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.741295 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c47b4509-0bb1-4360-9db3-29ebfcd734e3-audit-dir\") pod \"oauth-openshift-66458b6674-g766x\" (UID: \"c47b4509-0bb1-4360-9db3-29ebfcd734e3\") " pod="openshift-authentication/oauth-openshift-66458b6674-g766x" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.741312 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c47b4509-0bb1-4360-9db3-29ebfcd734e3-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-g766x\" (UID: \"c47b4509-0bb1-4360-9db3-29ebfcd734e3\") " pod="openshift-authentication/oauth-openshift-66458b6674-g766x" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.741327 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5b96d7cb-4106-4adb-baab-92ec201306e2-service-ca-bundle\") pod \"router-default-68cf44c8b8-xs5zv\" (UID: \"5b96d7cb-4106-4adb-baab-92ec201306e2\") " pod="openshift-ingress/router-default-68cf44c8b8-xs5zv" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.741348 5104 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c47b4509-0bb1-4360-9db3-29ebfcd734e3-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-g766x\" (UID: \"c47b4509-0bb1-4360-9db3-29ebfcd734e3\") " pod="openshift-authentication/oauth-openshift-66458b6674-g766x" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.741364 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/40d2656d-a61b-4aaa-8860-225ca88ac6a7-ca-trust-extracted\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.741380 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c47b4509-0bb1-4360-9db3-29ebfcd734e3-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-g766x\" (UID: \"c47b4509-0bb1-4360-9db3-29ebfcd734e3\") " pod="openshift-authentication/oauth-openshift-66458b6674-g766x" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.741395 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c47b4509-0bb1-4360-9db3-29ebfcd734e3-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-g766x\" (UID: \"c47b4509-0bb1-4360-9db3-29ebfcd734e3\") " pod="openshift-authentication/oauth-openshift-66458b6674-g766x" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.741409 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnt95\" (UniqueName: 
\"kubernetes.io/projected/c47b4509-0bb1-4360-9db3-29ebfcd734e3-kube-api-access-wnt95\") pod \"oauth-openshift-66458b6674-g766x\" (UID: \"c47b4509-0bb1-4360-9db3-29ebfcd734e3\") " pod="openshift-authentication/oauth-openshift-66458b6674-g766x" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.741424 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/40d2656d-a61b-4aaa-8860-225ca88ac6a7-registry-certificates\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.741439 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/40d2656d-a61b-4aaa-8860-225ca88ac6a7-trusted-ca\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.741454 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8xfb\" (UniqueName: \"kubernetes.io/projected/40d2656d-a61b-4aaa-8860-225ca88ac6a7-kube-api-access-p8xfb\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.741477 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " 
pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.741501 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c47b4509-0bb1-4360-9db3-29ebfcd734e3-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-g766x\" (UID: \"c47b4509-0bb1-4360-9db3-29ebfcd734e3\") " pod="openshift-authentication/oauth-openshift-66458b6674-g766x" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.741518 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/5c4af38b-fd2a-49b5-be40-cbd25eba4bde-tmp-dir\") pod \"dns-operator-799b87ffcd-xkl2m\" (UID: \"5c4af38b-fd2a-49b5-be40-cbd25eba4bde\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-xkl2m" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.741535 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/5b96d7cb-4106-4adb-baab-92ec201306e2-stats-auth\") pod \"router-default-68cf44c8b8-xs5zv\" (UID: \"5b96d7cb-4106-4adb-baab-92ec201306e2\") " pod="openshift-ingress/router-default-68cf44c8b8-xs5zv" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.741556 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/211c8215-e1c1-4bb9-881e-d2570dead87e-config\") pod \"openshift-apiserver-operator-846cbfc458-d6sw5\" (UID: \"211c8215-e1c1-4bb9-881e-d2570dead87e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-d6sw5" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.741571 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/c47b4509-0bb1-4360-9db3-29ebfcd734e3-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-g766x\" (UID: \"c47b4509-0bb1-4360-9db3-29ebfcd734e3\") " pod="openshift-authentication/oauth-openshift-66458b6674-g766x" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.741586 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b96d7cb-4106-4adb-baab-92ec201306e2-metrics-certs\") pod \"router-default-68cf44c8b8-xs5zv\" (UID: \"5b96d7cb-4106-4adb-baab-92ec201306e2\") " pod="openshift-ingress/router-default-68cf44c8b8-xs5zv" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.741604 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/40d2656d-a61b-4aaa-8860-225ca88ac6a7-bound-sa-token\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.741622 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpk6c\" (UniqueName: \"kubernetes.io/projected/5b96d7cb-4106-4adb-baab-92ec201306e2-kube-api-access-mpk6c\") pod \"router-default-68cf44c8b8-xs5zv\" (UID: \"5b96d7cb-4106-4adb-baab-92ec201306e2\") " pod="openshift-ingress/router-default-68cf44c8b8-xs5zv" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.741648 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/211c8215-e1c1-4bb9-881e-d2570dead87e-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-d6sw5\" (UID: \"211c8215-e1c1-4bb9-881e-d2570dead87e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-d6sw5" Jan 30 00:12:15 crc 
kubenswrapper[5104]: I0130 00:12:15.741684 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8kql4\" (UniqueName: \"kubernetes.io/projected/5c4af38b-fd2a-49b5-be40-cbd25eba4bde-kube-api-access-8kql4\") pod \"dns-operator-799b87ffcd-xkl2m\" (UID: \"5c4af38b-fd2a-49b5-be40-cbd25eba4bde\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-xkl2m" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.741702 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c47b4509-0bb1-4360-9db3-29ebfcd734e3-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-g766x\" (UID: \"c47b4509-0bb1-4360-9db3-29ebfcd734e3\") " pod="openshift-authentication/oauth-openshift-66458b6674-g766x" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.741719 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c47b4509-0bb1-4360-9db3-29ebfcd734e3-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-g766x\" (UID: \"c47b4509-0bb1-4360-9db3-29ebfcd734e3\") " pod="openshift-authentication/oauth-openshift-66458b6674-g766x" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.741734 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/40d2656d-a61b-4aaa-8860-225ca88ac6a7-registry-tls\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.743672 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/5c4af38b-fd2a-49b5-be40-cbd25eba4bde-tmp-dir\") pod 
\"dns-operator-799b87ffcd-xkl2m\" (UID: \"5c4af38b-fd2a-49b5-be40-cbd25eba4bde\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-xkl2m" Jan 30 00:12:15 crc kubenswrapper[5104]: E0130 00:12:15.744779 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:16.244629912 +0000 UTC m=+116.976969131 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-lhbqs" (UID: "40d2656d-a61b-4aaa-8860-225ca88ac6a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.746347 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5c4af38b-fd2a-49b5-be40-cbd25eba4bde-metrics-tls\") pod \"dns-operator-799b87ffcd-xkl2m\" (UID: \"5c4af38b-fd2a-49b5-be40-cbd25eba4bde\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-xkl2m" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.747658 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/211c8215-e1c1-4bb9-881e-d2570dead87e-config\") pod \"openshift-apiserver-operator-846cbfc458-d6sw5\" (UID: \"211c8215-e1c1-4bb9-881e-d2570dead87e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-d6sw5" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.758086 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/211c8215-e1c1-4bb9-881e-d2570dead87e-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-d6sw5\" (UID: \"211c8215-e1c1-4bb9-881e-d2570dead87e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-d6sw5" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.759443 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kql4\" (UniqueName: \"kubernetes.io/projected/5c4af38b-fd2a-49b5-be40-cbd25eba4bde-kube-api-access-8kql4\") pod \"dns-operator-799b87ffcd-xkl2m\" (UID: \"5c4af38b-fd2a-49b5-be40-cbd25eba4bde\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-xkl2m" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.761336 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhdw8\" (UniqueName: \"kubernetes.io/projected/211c8215-e1c1-4bb9-881e-d2570dead87e-kube-api-access-dhdw8\") pod \"openshift-apiserver-operator-846cbfc458-d6sw5\" (UID: \"211c8215-e1c1-4bb9-881e-d2570dead87e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-d6sw5" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.767248 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29495520-4kxpc" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.818351 5104 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-d6sw5" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.858342 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.858502 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c47b4509-0bb1-4360-9db3-29ebfcd734e3-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-g766x\" (UID: \"c47b4509-0bb1-4360-9db3-29ebfcd734e3\") " pod="openshift-authentication/oauth-openshift-66458b6674-g766x" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.858531 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c47b4509-0bb1-4360-9db3-29ebfcd734e3-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-g766x\" (UID: \"c47b4509-0bb1-4360-9db3-29ebfcd734e3\") " pod="openshift-authentication/oauth-openshift-66458b6674-g766x" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.858555 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c47b4509-0bb1-4360-9db3-29ebfcd734e3-audit-dir\") pod \"oauth-openshift-66458b6674-g766x\" (UID: \"c47b4509-0bb1-4360-9db3-29ebfcd734e3\") " pod="openshift-authentication/oauth-openshift-66458b6674-g766x" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.858584 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/c47b4509-0bb1-4360-9db3-29ebfcd734e3-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-g766x\" (UID: \"c47b4509-0bb1-4360-9db3-29ebfcd734e3\") " pod="openshift-authentication/oauth-openshift-66458b6674-g766x" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.858607 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5b96d7cb-4106-4adb-baab-92ec201306e2-service-ca-bundle\") pod \"router-default-68cf44c8b8-xs5zv\" (UID: \"5b96d7cb-4106-4adb-baab-92ec201306e2\") " pod="openshift-ingress/router-default-68cf44c8b8-xs5zv" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.858654 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c47b4509-0bb1-4360-9db3-29ebfcd734e3-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-g766x\" (UID: \"c47b4509-0bb1-4360-9db3-29ebfcd734e3\") " pod="openshift-authentication/oauth-openshift-66458b6674-g766x" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.858675 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/40d2656d-a61b-4aaa-8860-225ca88ac6a7-ca-trust-extracted\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.858700 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c47b4509-0bb1-4360-9db3-29ebfcd734e3-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-g766x\" (UID: \"c47b4509-0bb1-4360-9db3-29ebfcd734e3\") " pod="openshift-authentication/oauth-openshift-66458b6674-g766x" Jan 30 00:12:15 crc 
kubenswrapper[5104]: I0130 00:12:15.858722 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c47b4509-0bb1-4360-9db3-29ebfcd734e3-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-g766x\" (UID: \"c47b4509-0bb1-4360-9db3-29ebfcd734e3\") " pod="openshift-authentication/oauth-openshift-66458b6674-g766x" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.858746 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wnt95\" (UniqueName: \"kubernetes.io/projected/c47b4509-0bb1-4360-9db3-29ebfcd734e3-kube-api-access-wnt95\") pod \"oauth-openshift-66458b6674-g766x\" (UID: \"c47b4509-0bb1-4360-9db3-29ebfcd734e3\") " pod="openshift-authentication/oauth-openshift-66458b6674-g766x" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.858768 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/40d2656d-a61b-4aaa-8860-225ca88ac6a7-registry-certificates\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.858789 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/40d2656d-a61b-4aaa-8860-225ca88ac6a7-trusted-ca\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.858812 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-p8xfb\" (UniqueName: \"kubernetes.io/projected/40d2656d-a61b-4aaa-8860-225ca88ac6a7-kube-api-access-p8xfb\") pod 
\"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.858839 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrnfz\" (UniqueName: \"kubernetes.io/projected/ff629e62-b58e-4d85-aa96-fbc1845b304b-kube-api-access-zrnfz\") pod \"route-controller-manager-776cdc94d6-c5tsr\" (UID: \"ff629e62-b58e-4d85-aa96-fbc1845b304b\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-c5tsr" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.858902 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c47b4509-0bb1-4360-9db3-29ebfcd734e3-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-g766x\" (UID: \"c47b4509-0bb1-4360-9db3-29ebfcd734e3\") " pod="openshift-authentication/oauth-openshift-66458b6674-g766x" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.858930 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/5b96d7cb-4106-4adb-baab-92ec201306e2-stats-auth\") pod \"router-default-68cf44c8b8-xs5zv\" (UID: \"5b96d7cb-4106-4adb-baab-92ec201306e2\") " pod="openshift-ingress/router-default-68cf44c8b8-xs5zv" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.858958 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c47b4509-0bb1-4360-9db3-29ebfcd734e3-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-g766x\" (UID: \"c47b4509-0bb1-4360-9db3-29ebfcd734e3\") " pod="openshift-authentication/oauth-openshift-66458b6674-g766x" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.858978 5104 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b96d7cb-4106-4adb-baab-92ec201306e2-metrics-certs\") pod \"router-default-68cf44c8b8-xs5zv\" (UID: \"5b96d7cb-4106-4adb-baab-92ec201306e2\") " pod="openshift-ingress/router-default-68cf44c8b8-xs5zv" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.859005 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/40d2656d-a61b-4aaa-8860-225ca88ac6a7-bound-sa-token\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.859030 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mpk6c\" (UniqueName: \"kubernetes.io/projected/5b96d7cb-4106-4adb-baab-92ec201306e2-kube-api-access-mpk6c\") pod \"router-default-68cf44c8b8-xs5zv\" (UID: \"5b96d7cb-4106-4adb-baab-92ec201306e2\") " pod="openshift-ingress/router-default-68cf44c8b8-xs5zv" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.859087 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ff629e62-b58e-4d85-aa96-fbc1845b304b-tmp\") pod \"route-controller-manager-776cdc94d6-c5tsr\" (UID: \"ff629e62-b58e-4d85-aa96-fbc1845b304b\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-c5tsr" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.859116 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c47b4509-0bb1-4360-9db3-29ebfcd734e3-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-g766x\" (UID: \"c47b4509-0bb1-4360-9db3-29ebfcd734e3\") " 
pod="openshift-authentication/oauth-openshift-66458b6674-g766x" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.859139 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c47b4509-0bb1-4360-9db3-29ebfcd734e3-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-g766x\" (UID: \"c47b4509-0bb1-4360-9db3-29ebfcd734e3\") " pod="openshift-authentication/oauth-openshift-66458b6674-g766x" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.859159 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/40d2656d-a61b-4aaa-8860-225ca88ac6a7-registry-tls\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.859193 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/40d2656d-a61b-4aaa-8860-225ca88ac6a7-installation-pull-secrets\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.859215 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/5b96d7cb-4106-4adb-baab-92ec201306e2-default-certificate\") pod \"router-default-68cf44c8b8-xs5zv\" (UID: \"5b96d7cb-4106-4adb-baab-92ec201306e2\") " pod="openshift-ingress/router-default-68cf44c8b8-xs5zv" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.859237 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/ff629e62-b58e-4d85-aa96-fbc1845b304b-client-ca\") pod \"route-controller-manager-776cdc94d6-c5tsr\" (UID: \"ff629e62-b58e-4d85-aa96-fbc1845b304b\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-c5tsr" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.859278 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c47b4509-0bb1-4360-9db3-29ebfcd734e3-audit-policies\") pod \"oauth-openshift-66458b6674-g766x\" (UID: \"c47b4509-0bb1-4360-9db3-29ebfcd734e3\") " pod="openshift-authentication/oauth-openshift-66458b6674-g766x" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.859301 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff629e62-b58e-4d85-aa96-fbc1845b304b-config\") pod \"route-controller-manager-776cdc94d6-c5tsr\" (UID: \"ff629e62-b58e-4d85-aa96-fbc1845b304b\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-c5tsr" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.859329 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c47b4509-0bb1-4360-9db3-29ebfcd734e3-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-g766x\" (UID: \"c47b4509-0bb1-4360-9db3-29ebfcd734e3\") " pod="openshift-authentication/oauth-openshift-66458b6674-g766x" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.859352 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff629e62-b58e-4d85-aa96-fbc1845b304b-serving-cert\") pod \"route-controller-manager-776cdc94d6-c5tsr\" (UID: \"ff629e62-b58e-4d85-aa96-fbc1845b304b\") " 
pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-c5tsr" Jan 30 00:12:15 crc kubenswrapper[5104]: E0130 00:12:15.859482 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:16.35946317 +0000 UTC m=+117.091802399 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.862243 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-xkl2m" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.862650 5104 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-v56dx" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.864068 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c47b4509-0bb1-4360-9db3-29ebfcd734e3-audit-dir\") pod \"oauth-openshift-66458b6674-g766x\" (UID: \"c47b4509-0bb1-4360-9db3-29ebfcd734e3\") " pod="openshift-authentication/oauth-openshift-66458b6674-g766x" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.866735 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5b96d7cb-4106-4adb-baab-92ec201306e2-service-ca-bundle\") pod \"router-default-68cf44c8b8-xs5zv\" (UID: \"5b96d7cb-4106-4adb-baab-92ec201306e2\") " pod="openshift-ingress/router-default-68cf44c8b8-xs5zv" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.867935 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c47b4509-0bb1-4360-9db3-29ebfcd734e3-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-g766x\" (UID: \"c47b4509-0bb1-4360-9db3-29ebfcd734e3\") " pod="openshift-authentication/oauth-openshift-66458b6674-g766x" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.867962 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/40d2656d-a61b-4aaa-8860-225ca88ac6a7-ca-trust-extracted\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.868298 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/40d2656d-a61b-4aaa-8860-225ca88ac6a7-registry-certificates\") 
pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.868739 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c47b4509-0bb1-4360-9db3-29ebfcd734e3-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-g766x\" (UID: \"c47b4509-0bb1-4360-9db3-29ebfcd734e3\") " pod="openshift-authentication/oauth-openshift-66458b6674-g766x" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.868992 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c47b4509-0bb1-4360-9db3-29ebfcd734e3-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-g766x\" (UID: \"c47b4509-0bb1-4360-9db3-29ebfcd734e3\") " pod="openshift-authentication/oauth-openshift-66458b6674-g766x" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.869608 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c47b4509-0bb1-4360-9db3-29ebfcd734e3-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-g766x\" (UID: \"c47b4509-0bb1-4360-9db3-29ebfcd734e3\") " pod="openshift-authentication/oauth-openshift-66458b6674-g766x" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.869882 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c47b4509-0bb1-4360-9db3-29ebfcd734e3-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-g766x\" (UID: \"c47b4509-0bb1-4360-9db3-29ebfcd734e3\") " pod="openshift-authentication/oauth-openshift-66458b6674-g766x" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.870426 5104 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c47b4509-0bb1-4360-9db3-29ebfcd734e3-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-g766x\" (UID: \"c47b4509-0bb1-4360-9db3-29ebfcd734e3\") " pod="openshift-authentication/oauth-openshift-66458b6674-g766x" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.871142 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c47b4509-0bb1-4360-9db3-29ebfcd734e3-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-g766x\" (UID: \"c47b4509-0bb1-4360-9db3-29ebfcd734e3\") " pod="openshift-authentication/oauth-openshift-66458b6674-g766x" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.871593 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/5b96d7cb-4106-4adb-baab-92ec201306e2-default-certificate\") pod \"router-default-68cf44c8b8-xs5zv\" (UID: \"5b96d7cb-4106-4adb-baab-92ec201306e2\") " pod="openshift-ingress/router-default-68cf44c8b8-xs5zv" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.872182 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c47b4509-0bb1-4360-9db3-29ebfcd734e3-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-g766x\" (UID: \"c47b4509-0bb1-4360-9db3-29ebfcd734e3\") " pod="openshift-authentication/oauth-openshift-66458b6674-g766x" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.872522 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b96d7cb-4106-4adb-baab-92ec201306e2-metrics-certs\") pod \"router-default-68cf44c8b8-xs5zv\" (UID: \"5b96d7cb-4106-4adb-baab-92ec201306e2\") " 
pod="openshift-ingress/router-default-68cf44c8b8-xs5zv" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.872792 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c47b4509-0bb1-4360-9db3-29ebfcd734e3-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-g766x\" (UID: \"c47b4509-0bb1-4360-9db3-29ebfcd734e3\") " pod="openshift-authentication/oauth-openshift-66458b6674-g766x" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.882834 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/5b96d7cb-4106-4adb-baab-92ec201306e2-stats-auth\") pod \"router-default-68cf44c8b8-xs5zv\" (UID: \"5b96d7cb-4106-4adb-baab-92ec201306e2\") " pod="openshift-ingress/router-default-68cf44c8b8-xs5zv" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.883041 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/40d2656d-a61b-4aaa-8860-225ca88ac6a7-registry-tls\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.883472 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c47b4509-0bb1-4360-9db3-29ebfcd734e3-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-g766x\" (UID: \"c47b4509-0bb1-4360-9db3-29ebfcd734e3\") " pod="openshift-authentication/oauth-openshift-66458b6674-g766x" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.888882 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpk6c\" (UniqueName: \"kubernetes.io/projected/5b96d7cb-4106-4adb-baab-92ec201306e2-kube-api-access-mpk6c\") pod 
\"router-default-68cf44c8b8-xs5zv\" (UID: \"5b96d7cb-4106-4adb-baab-92ec201306e2\") " pod="openshift-ingress/router-default-68cf44c8b8-xs5zv" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.888905 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8xfb\" (UniqueName: \"kubernetes.io/projected/40d2656d-a61b-4aaa-8860-225ca88ac6a7-kube-api-access-p8xfb\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.889582 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/40d2656d-a61b-4aaa-8860-225ca88ac6a7-installation-pull-secrets\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.893141 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/40d2656d-a61b-4aaa-8860-225ca88ac6a7-bound-sa-token\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.895439 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c47b4509-0bb1-4360-9db3-29ebfcd734e3-audit-policies\") pod \"oauth-openshift-66458b6674-g766x\" (UID: \"c47b4509-0bb1-4360-9db3-29ebfcd734e3\") " pod="openshift-authentication/oauth-openshift-66458b6674-g766x" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.895521 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/c47b4509-0bb1-4360-9db3-29ebfcd734e3-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-g766x\" (UID: \"c47b4509-0bb1-4360-9db3-29ebfcd734e3\") " pod="openshift-authentication/oauth-openshift-66458b6674-g766x" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.896119 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnt95\" (UniqueName: \"kubernetes.io/projected/c47b4509-0bb1-4360-9db3-29ebfcd734e3-kube-api-access-wnt95\") pod \"oauth-openshift-66458b6674-g766x\" (UID: \"c47b4509-0bb1-4360-9db3-29ebfcd734e3\") " pod="openshift-authentication/oauth-openshift-66458b6674-g766x" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.902232 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-g766x" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.916274 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/40d2656d-a61b-4aaa-8860-225ca88ac6a7-trusted-ca\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.917663 5104 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-r9m28"] Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.918481 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-c5tsr" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.918512 5104 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-jd2mk" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.921151 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\"" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.921547 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\"" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.921708 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\"" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.921828 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\"" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.922006 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\"" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.922244 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\"" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.922401 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\"" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.922782 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\"" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.961643 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: 
\"kubernetes.io/empty-dir/ff629e62-b58e-4d85-aa96-fbc1845b304b-tmp\") pod \"route-controller-manager-776cdc94d6-c5tsr\" (UID: \"ff629e62-b58e-4d85-aa96-fbc1845b304b\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-c5tsr" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.961696 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ff629e62-b58e-4d85-aa96-fbc1845b304b-client-ca\") pod \"route-controller-manager-776cdc94d6-c5tsr\" (UID: \"ff629e62-b58e-4d85-aa96-fbc1845b304b\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-c5tsr" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.961736 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff629e62-b58e-4d85-aa96-fbc1845b304b-config\") pod \"route-controller-manager-776cdc94d6-c5tsr\" (UID: \"ff629e62-b58e-4d85-aa96-fbc1845b304b\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-c5tsr" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.961759 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff629e62-b58e-4d85-aa96-fbc1845b304b-serving-cert\") pod \"route-controller-manager-776cdc94d6-c5tsr\" (UID: \"ff629e62-b58e-4d85-aa96-fbc1845b304b\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-c5tsr" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.961815 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zrnfz\" (UniqueName: \"kubernetes.io/projected/ff629e62-b58e-4d85-aa96-fbc1845b304b-kube-api-access-zrnfz\") pod \"route-controller-manager-776cdc94d6-c5tsr\" (UID: \"ff629e62-b58e-4d85-aa96-fbc1845b304b\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-c5tsr" Jan 
30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.961843 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:15 crc kubenswrapper[5104]: E0130 00:12:15.962373 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:16.462357299 +0000 UTC m=+117.194696518 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-lhbqs" (UID: "40d2656d-a61b-4aaa-8860-225ca88ac6a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.963501 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ff629e62-b58e-4d85-aa96-fbc1845b304b-tmp\") pod \"route-controller-manager-776cdc94d6-c5tsr\" (UID: \"ff629e62-b58e-4d85-aa96-fbc1845b304b\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-c5tsr" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.964238 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ff629e62-b58e-4d85-aa96-fbc1845b304b-client-ca\") pod \"route-controller-manager-776cdc94d6-c5tsr\" (UID: 
\"ff629e62-b58e-4d85-aa96-fbc1845b304b\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-c5tsr" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.964257 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff629e62-b58e-4d85-aa96-fbc1845b304b-config\") pod \"route-controller-manager-776cdc94d6-c5tsr\" (UID: \"ff629e62-b58e-4d85-aa96-fbc1845b304b\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-c5tsr" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.968679 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff629e62-b58e-4d85-aa96-fbc1845b304b-serving-cert\") pod \"route-controller-manager-776cdc94d6-c5tsr\" (UID: \"ff629e62-b58e-4d85-aa96-fbc1845b304b\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-c5tsr" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.982575 5104 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-xs5zv" Jan 30 00:12:15 crc kubenswrapper[5104]: I0130 00:12:15.985589 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrnfz\" (UniqueName: \"kubernetes.io/projected/ff629e62-b58e-4d85-aa96-fbc1845b304b-kube-api-access-zrnfz\") pod \"route-controller-manager-776cdc94d6-c5tsr\" (UID: \"ff629e62-b58e-4d85-aa96-fbc1845b304b\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-c5tsr" Jan 30 00:12:16 crc kubenswrapper[5104]: I0130 00:12:16.063398 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:16 crc kubenswrapper[5104]: I0130 00:12:16.063605 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/656c26bb-2611-460d-b115-ad18f57cc138-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-jd2mk\" (UID: \"656c26bb-2611-460d-b115-ad18f57cc138\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-jd2mk" Jan 30 00:12:16 crc kubenswrapper[5104]: E0130 00:12:16.064246 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:16.564225869 +0000 UTC m=+117.296565108 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:16 crc kubenswrapper[5104]: I0130 00:12:16.064278 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/656c26bb-2611-460d-b115-ad18f57cc138-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-jd2mk\" (UID: \"656c26bb-2611-460d-b115-ad18f57cc138\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-jd2mk" Jan 30 00:12:16 crc kubenswrapper[5104]: I0130 00:12:16.064330 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/656c26bb-2611-460d-b115-ad18f57cc138-tmp\") pod \"cluster-image-registry-operator-86c45576b9-jd2mk\" (UID: \"656c26bb-2611-460d-b115-ad18f57cc138\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-jd2mk" Jan 30 00:12:16 crc kubenswrapper[5104]: I0130 00:12:16.064380 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:16 crc kubenswrapper[5104]: I0130 00:12:16.064402 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" 
(UniqueName: \"kubernetes.io/configmap/656c26bb-2611-460d-b115-ad18f57cc138-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-jd2mk\" (UID: \"656c26bb-2611-460d-b115-ad18f57cc138\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-jd2mk" Jan 30 00:12:16 crc kubenswrapper[5104]: I0130 00:12:16.064426 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/656c26bb-2611-460d-b115-ad18f57cc138-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-jd2mk\" (UID: \"656c26bb-2611-460d-b115-ad18f57cc138\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-jd2mk" Jan 30 00:12:16 crc kubenswrapper[5104]: I0130 00:12:16.064451 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ww869\" (UniqueName: \"kubernetes.io/projected/656c26bb-2611-460d-b115-ad18f57cc138-kube-api-access-ww869\") pod \"cluster-image-registry-operator-86c45576b9-jd2mk\" (UID: \"656c26bb-2611-460d-b115-ad18f57cc138\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-jd2mk" Jan 30 00:12:16 crc kubenswrapper[5104]: E0130 00:12:16.064944 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:16.564924648 +0000 UTC m=+117.297263867 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-lhbqs" (UID: "40d2656d-a61b-4aaa-8860-225ca88ac6a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:16 crc kubenswrapper[5104]: I0130 00:12:16.165107 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:16 crc kubenswrapper[5104]: E0130 00:12:16.165205 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:16.665185254 +0000 UTC m=+117.397524473 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:16 crc kubenswrapper[5104]: I0130 00:12:16.165337 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/656c26bb-2611-460d-b115-ad18f57cc138-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-jd2mk\" (UID: \"656c26bb-2611-460d-b115-ad18f57cc138\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-jd2mk" Jan 30 00:12:16 crc kubenswrapper[5104]: I0130 00:12:16.165409 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/656c26bb-2611-460d-b115-ad18f57cc138-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-jd2mk\" (UID: \"656c26bb-2611-460d-b115-ad18f57cc138\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-jd2mk" Jan 30 00:12:16 crc kubenswrapper[5104]: I0130 00:12:16.165443 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/656c26bb-2611-460d-b115-ad18f57cc138-tmp\") pod \"cluster-image-registry-operator-86c45576b9-jd2mk\" (UID: \"656c26bb-2611-460d-b115-ad18f57cc138\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-jd2mk" Jan 30 00:12:16 crc kubenswrapper[5104]: I0130 00:12:16.165474 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:16 crc kubenswrapper[5104]: I0130 00:12:16.165496 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/656c26bb-2611-460d-b115-ad18f57cc138-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-jd2mk\" (UID: \"656c26bb-2611-460d-b115-ad18f57cc138\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-jd2mk" Jan 30 00:12:16 crc kubenswrapper[5104]: I0130 00:12:16.165518 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/656c26bb-2611-460d-b115-ad18f57cc138-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-jd2mk\" (UID: \"656c26bb-2611-460d-b115-ad18f57cc138\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-jd2mk" Jan 30 00:12:16 crc kubenswrapper[5104]: E0130 00:12:16.165921 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:16.665900744 +0000 UTC m=+117.398239963 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-lhbqs" (UID: "40d2656d-a61b-4aaa-8860-225ca88ac6a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:16 crc kubenswrapper[5104]: I0130 00:12:16.169926 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/656c26bb-2611-460d-b115-ad18f57cc138-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-jd2mk\" (UID: \"656c26bb-2611-460d-b115-ad18f57cc138\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-jd2mk" Jan 30 00:12:16 crc kubenswrapper[5104]: I0130 00:12:16.170248 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ww869\" (UniqueName: \"kubernetes.io/projected/656c26bb-2611-460d-b115-ad18f57cc138-kube-api-access-ww869\") pod \"cluster-image-registry-operator-86c45576b9-jd2mk\" (UID: \"656c26bb-2611-460d-b115-ad18f57cc138\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-jd2mk" Jan 30 00:12:16 crc kubenswrapper[5104]: I0130 00:12:16.170507 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/656c26bb-2611-460d-b115-ad18f57cc138-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-jd2mk\" (UID: \"656c26bb-2611-460d-b115-ad18f57cc138\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-jd2mk" Jan 30 00:12:16 crc kubenswrapper[5104]: I0130 00:12:16.170803 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: 
\"kubernetes.io/empty-dir/656c26bb-2611-460d-b115-ad18f57cc138-tmp\") pod \"cluster-image-registry-operator-86c45576b9-jd2mk\" (UID: \"656c26bb-2611-460d-b115-ad18f57cc138\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-jd2mk" Jan 30 00:12:16 crc kubenswrapper[5104]: I0130 00:12:16.174975 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/656c26bb-2611-460d-b115-ad18f57cc138-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-jd2mk\" (UID: \"656c26bb-2611-460d-b115-ad18f57cc138\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-jd2mk" Jan 30 00:12:16 crc kubenswrapper[5104]: I0130 00:12:16.181914 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/656c26bb-2611-460d-b115-ad18f57cc138-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-jd2mk\" (UID: \"656c26bb-2611-460d-b115-ad18f57cc138\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-jd2mk" Jan 30 00:12:16 crc kubenswrapper[5104]: I0130 00:12:16.187448 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ww869\" (UniqueName: \"kubernetes.io/projected/656c26bb-2611-460d-b115-ad18f57cc138-kube-api-access-ww869\") pod \"cluster-image-registry-operator-86c45576b9-jd2mk\" (UID: \"656c26bb-2611-460d-b115-ad18f57cc138\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-jd2mk" Jan 30 00:12:16 crc kubenswrapper[5104]: I0130 00:12:16.238782 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-c5tsr" Jan 30 00:12:16 crc kubenswrapper[5104]: I0130 00:12:16.247908 5104 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-jd2mk" Jan 30 00:12:16 crc kubenswrapper[5104]: I0130 00:12:16.271304 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:16 crc kubenswrapper[5104]: I0130 00:12:16.271434 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:12:16 crc kubenswrapper[5104]: E0130 00:12:16.271449 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:16.771427722 +0000 UTC m=+117.503766941 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:16 crc kubenswrapper[5104]: I0130 00:12:16.271498 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:12:16 crc kubenswrapper[5104]: I0130 00:12:16.271527 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:16 crc kubenswrapper[5104]: I0130 00:12:16.271548 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:12:16 crc kubenswrapper[5104]: I0130 00:12:16.271578 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: 
\"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:12:16 crc kubenswrapper[5104]: E0130 00:12:16.271712 5104 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 00:12:16 crc kubenswrapper[5104]: E0130 00:12:16.271765 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-30 00:12:48.271756271 +0000 UTC m=+149.004095490 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 00:12:16 crc kubenswrapper[5104]: E0130 00:12:16.272031 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:16.772010838 +0000 UTC m=+117.504350067 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-lhbqs" (UID: "40d2656d-a61b-4aaa-8860-225ca88ac6a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:16 crc kubenswrapper[5104]: E0130 00:12:16.272088 5104 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 00:12:16 crc kubenswrapper[5104]: E0130 00:12:16.272384 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-30 00:12:48.272359077 +0000 UTC m=+149.004698296 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 00:12:16 crc kubenswrapper[5104]: I0130 00:12:16.276701 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:12:16 crc kubenswrapper[5104]: I0130 00:12:16.287316 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:12:16 crc kubenswrapper[5104]: I0130 00:12:16.300525 5104 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:12:16 crc kubenswrapper[5104]: W0130 00:12:16.339491 5104 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc47b4509_0bb1_4360_9db3_29ebfcd734e3.slice/crio-8765d8b13fbc965d68b29dcd8d2dfd68578d3842f074689b719b57978f5048c4 WatchSource:0}: Error finding container 8765d8b13fbc965d68b29dcd8d2dfd68578d3842f074689b719b57978f5048c4: Status 404 returned error can't find the container with id 8765d8b13fbc965d68b29dcd8d2dfd68578d3842f074689b719b57978f5048c4 Jan 30 00:12:16 crc kubenswrapper[5104]: I0130 00:12:16.372648 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:16 crc kubenswrapper[5104]: I0130 00:12:16.372846 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8549d8ab-08fd-4d10-b03e-d162d745184a-metrics-certs\") pod \"network-metrics-daemon-gvjb6\" (UID: \"8549d8ab-08fd-4d10-b03e-d162d745184a\") " pod="openshift-multus/network-metrics-daemon-gvjb6" Jan 30 00:12:16 crc kubenswrapper[5104]: E0130 00:12:16.373710 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:16.873676362 +0000 UTC m=+117.606015591 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:16 crc kubenswrapper[5104]: I0130 00:12:16.378182 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8549d8ab-08fd-4d10-b03e-d162d745184a-metrics-certs\") pod \"network-metrics-daemon-gvjb6\" (UID: \"8549d8ab-08fd-4d10-b03e-d162d745184a\") " pod="openshift-multus/network-metrics-daemon-gvjb6" Jan 30 00:12:16 crc kubenswrapper[5104]: I0130 00:12:16.473632 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:16 crc kubenswrapper[5104]: E0130 00:12:16.473966 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:16.973952719 +0000 UTC m=+117.706291938 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-lhbqs" (UID: "40d2656d-a61b-4aaa-8860-225ca88ac6a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:16 crc kubenswrapper[5104]: W0130 00:12:16.490123 5104 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf863fff9_286a_45fa_b8f0_8a86994b8440.slice/crio-7f384abfd934c466f6b6b04c2497737398978acca6a570cb8aa751cbe887602f WatchSource:0}: Error finding container 7f384abfd934c466f6b6b04c2497737398978acca6a570cb8aa751cbe887602f: Status 404 returned error can't find the container with id 7f384abfd934c466f6b6b04c2497737398978acca6a570cb8aa751cbe887602f Jan 30 00:12:16 crc kubenswrapper[5104]: W0130 00:12:16.538166 5104 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podff629e62_b58e_4d85_aa96_fbc1845b304b.slice/crio-eb7550f4e431003bb67113687f3142c13f17529aa85082ce1bb3423350829ff7 WatchSource:0}: Error finding container eb7550f4e431003bb67113687f3142c13f17529aa85082ce1bb3423350829ff7: Status 404 returned error can't find the container with id eb7550f4e431003bb67113687f3142c13f17529aa85082ce1bb3423350829ff7 Jan 30 00:12:16 crc kubenswrapper[5104]: I0130 00:12:16.574658 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:16 crc kubenswrapper[5104]: E0130 00:12:16.574781 5104 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:17.07475561 +0000 UTC m=+117.807094829 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:16 crc kubenswrapper[5104]: I0130 00:12:16.575059 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:16 crc kubenswrapper[5104]: E0130 00:12:16.575328 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:17.075314885 +0000 UTC m=+117.807654104 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-lhbqs" (UID: "40d2656d-a61b-4aaa-8860-225ca88ac6a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:16 crc kubenswrapper[5104]: I0130 00:12:16.676443 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:16 crc kubenswrapper[5104]: E0130 00:12:16.676602 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:17.176572088 +0000 UTC m=+117.908911347 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:16 crc kubenswrapper[5104]: I0130 00:12:16.676752 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:16 crc kubenswrapper[5104]: E0130 00:12:16.677241 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:17.177223686 +0000 UTC m=+117.909562945 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-lhbqs" (UID: "40d2656d-a61b-4aaa-8860-225ca88ac6a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:16 crc kubenswrapper[5104]: I0130 00:12:16.679015 5104 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-gvjb6" Jan 30 00:12:16 crc kubenswrapper[5104]: I0130 00:12:16.683666 5104 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-jn5m9"] Jan 30 00:12:16 crc kubenswrapper[5104]: I0130 00:12:16.683834 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-r9m28" Jan 30 00:12:16 crc kubenswrapper[5104]: I0130 00:12:16.691435 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\"" Jan 30 00:12:16 crc kubenswrapper[5104]: I0130 00:12:16.691665 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:12:16 crc kubenswrapper[5104]: I0130 00:12:16.691765 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\"" Jan 30 00:12:16 crc kubenswrapper[5104]: I0130 00:12:16.691921 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\"" Jan 30 00:12:16 crc kubenswrapper[5104]: I0130 00:12:16.691998 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\"" Jan 30 00:12:16 crc kubenswrapper[5104]: I0130 00:12:16.692034 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\"" Jan 30 00:12:16 crc kubenswrapper[5104]: I0130 00:12:16.692095 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\"" Jan 30 00:12:16 crc kubenswrapper[5104]: I0130 00:12:16.693000 5104 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\"" Jan 30 00:12:16 crc kubenswrapper[5104]: I0130 00:12:16.777885 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:16 crc kubenswrapper[5104]: E0130 00:12:16.778264 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:17.278243053 +0000 UTC m=+118.010582272 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:16 crc kubenswrapper[5104]: I0130 00:12:16.778408 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d158deef-46a2-4f4b-bd06-fce37341fa01-config\") pod \"etcd-operator-69b85846b6-r9m28\" (UID: \"d158deef-46a2-4f4b-bd06-fce37341fa01\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-r9m28" Jan 30 00:12:16 crc kubenswrapper[5104]: I0130 00:12:16.778445 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/d158deef-46a2-4f4b-bd06-fce37341fa01-serving-cert\") pod \"etcd-operator-69b85846b6-r9m28\" (UID: \"d158deef-46a2-4f4b-bd06-fce37341fa01\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-r9m28" Jan 30 00:12:16 crc kubenswrapper[5104]: I0130 00:12:16.778502 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:16 crc kubenswrapper[5104]: I0130 00:12:16.778546 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/d158deef-46a2-4f4b-bd06-fce37341fa01-etcd-service-ca\") pod \"etcd-operator-69b85846b6-r9m28\" (UID: \"d158deef-46a2-4f4b-bd06-fce37341fa01\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-r9m28" Jan 30 00:12:16 crc kubenswrapper[5104]: I0130 00:12:16.778569 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/d158deef-46a2-4f4b-bd06-fce37341fa01-tmp-dir\") pod \"etcd-operator-69b85846b6-r9m28\" (UID: \"d158deef-46a2-4f4b-bd06-fce37341fa01\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-r9m28" Jan 30 00:12:16 crc kubenswrapper[5104]: I0130 00:12:16.778653 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kpl7\" (UniqueName: \"kubernetes.io/projected/d158deef-46a2-4f4b-bd06-fce37341fa01-kube-api-access-4kpl7\") pod \"etcd-operator-69b85846b6-r9m28\" (UID: \"d158deef-46a2-4f4b-bd06-fce37341fa01\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-r9m28" Jan 30 00:12:16 crc kubenswrapper[5104]: 
I0130 00:12:16.778719 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d158deef-46a2-4f4b-bd06-fce37341fa01-etcd-client\") pod \"etcd-operator-69b85846b6-r9m28\" (UID: \"d158deef-46a2-4f4b-bd06-fce37341fa01\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-r9m28" Jan 30 00:12:16 crc kubenswrapper[5104]: I0130 00:12:16.778843 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/d158deef-46a2-4f4b-bd06-fce37341fa01-etcd-ca\") pod \"etcd-operator-69b85846b6-r9m28\" (UID: \"d158deef-46a2-4f4b-bd06-fce37341fa01\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-r9m28" Jan 30 00:12:16 crc kubenswrapper[5104]: E0130 00:12:16.779014 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:17.279003003 +0000 UTC m=+118.011342232 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-lhbqs" (UID: "40d2656d-a61b-4aaa-8860-225ca88ac6a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:16 crc kubenswrapper[5104]: I0130 00:12:16.879534 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:16 crc kubenswrapper[5104]: E0130 00:12:16.879741 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:17.379697852 +0000 UTC m=+118.112037071 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:16 crc kubenswrapper[5104]: I0130 00:12:16.879824 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4kpl7\" (UniqueName: \"kubernetes.io/projected/d158deef-46a2-4f4b-bd06-fce37341fa01-kube-api-access-4kpl7\") pod \"etcd-operator-69b85846b6-r9m28\" (UID: \"d158deef-46a2-4f4b-bd06-fce37341fa01\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-r9m28" Jan 30 00:12:16 crc kubenswrapper[5104]: I0130 00:12:16.879897 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d158deef-46a2-4f4b-bd06-fce37341fa01-etcd-client\") pod \"etcd-operator-69b85846b6-r9m28\" (UID: \"d158deef-46a2-4f4b-bd06-fce37341fa01\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-r9m28" Jan 30 00:12:16 crc kubenswrapper[5104]: I0130 00:12:16.880035 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/d158deef-46a2-4f4b-bd06-fce37341fa01-etcd-ca\") pod \"etcd-operator-69b85846b6-r9m28\" (UID: \"d158deef-46a2-4f4b-bd06-fce37341fa01\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-r9m28" Jan 30 00:12:16 crc kubenswrapper[5104]: I0130 00:12:16.880167 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d158deef-46a2-4f4b-bd06-fce37341fa01-config\") pod \"etcd-operator-69b85846b6-r9m28\" (UID: \"d158deef-46a2-4f4b-bd06-fce37341fa01\") " 
pod="openshift-etcd-operator/etcd-operator-69b85846b6-r9m28" Jan 30 00:12:16 crc kubenswrapper[5104]: I0130 00:12:16.880296 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d158deef-46a2-4f4b-bd06-fce37341fa01-serving-cert\") pod \"etcd-operator-69b85846b6-r9m28\" (UID: \"d158deef-46a2-4f4b-bd06-fce37341fa01\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-r9m28" Jan 30 00:12:16 crc kubenswrapper[5104]: I0130 00:12:16.880323 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:16 crc kubenswrapper[5104]: I0130 00:12:16.880394 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/d158deef-46a2-4f4b-bd06-fce37341fa01-etcd-service-ca\") pod \"etcd-operator-69b85846b6-r9m28\" (UID: \"d158deef-46a2-4f4b-bd06-fce37341fa01\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-r9m28" Jan 30 00:12:16 crc kubenswrapper[5104]: I0130 00:12:16.880658 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/d158deef-46a2-4f4b-bd06-fce37341fa01-tmp-dir\") pod \"etcd-operator-69b85846b6-r9m28\" (UID: \"d158deef-46a2-4f4b-bd06-fce37341fa01\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-r9m28" Jan 30 00:12:16 crc kubenswrapper[5104]: E0130 00:12:16.881667 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2026-01-30 00:12:17.381654445 +0000 UTC m=+118.113993664 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-lhbqs" (UID: "40d2656d-a61b-4aaa-8860-225ca88ac6a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:16 crc kubenswrapper[5104]: I0130 00:12:16.884631 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/d158deef-46a2-4f4b-bd06-fce37341fa01-tmp-dir\") pod \"etcd-operator-69b85846b6-r9m28\" (UID: \"d158deef-46a2-4f4b-bd06-fce37341fa01\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-r9m28" Jan 30 00:12:16 crc kubenswrapper[5104]: I0130 00:12:16.885744 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/d158deef-46a2-4f4b-bd06-fce37341fa01-etcd-service-ca\") pod \"etcd-operator-69b85846b6-r9m28\" (UID: \"d158deef-46a2-4f4b-bd06-fce37341fa01\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-r9m28" Jan 30 00:12:16 crc kubenswrapper[5104]: I0130 00:12:16.886044 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/d158deef-46a2-4f4b-bd06-fce37341fa01-etcd-ca\") pod \"etcd-operator-69b85846b6-r9m28\" (UID: \"d158deef-46a2-4f4b-bd06-fce37341fa01\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-r9m28" Jan 30 00:12:16 crc kubenswrapper[5104]: I0130 00:12:16.886487 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d158deef-46a2-4f4b-bd06-fce37341fa01-config\") pod 
\"etcd-operator-69b85846b6-r9m28\" (UID: \"d158deef-46a2-4f4b-bd06-fce37341fa01\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-r9m28" Jan 30 00:12:16 crc kubenswrapper[5104]: I0130 00:12:16.888476 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d158deef-46a2-4f4b-bd06-fce37341fa01-etcd-client\") pod \"etcd-operator-69b85846b6-r9m28\" (UID: \"d158deef-46a2-4f4b-bd06-fce37341fa01\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-r9m28" Jan 30 00:12:16 crc kubenswrapper[5104]: I0130 00:12:16.888522 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d158deef-46a2-4f4b-bd06-fce37341fa01-serving-cert\") pod \"etcd-operator-69b85846b6-r9m28\" (UID: \"d158deef-46a2-4f4b-bd06-fce37341fa01\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-r9m28" Jan 30 00:12:16 crc kubenswrapper[5104]: I0130 00:12:16.901676 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kpl7\" (UniqueName: \"kubernetes.io/projected/d158deef-46a2-4f4b-bd06-fce37341fa01-kube-api-access-4kpl7\") pod \"etcd-operator-69b85846b6-r9m28\" (UID: \"d158deef-46a2-4f4b-bd06-fce37341fa01\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-r9m28" Jan 30 00:12:16 crc kubenswrapper[5104]: W0130 00:12:16.913683 5104 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8549d8ab_08fd_4d10_b03e_d162d745184a.slice/crio-003e6b0220d7dcb350f8f94358fbb81384f523230a0a32f1ffe82961a150677c WatchSource:0}: Error finding container 003e6b0220d7dcb350f8f94358fbb81384f523230a0a32f1ffe82961a150677c: Status 404 returned error can't find the container with id 003e6b0220d7dcb350f8f94358fbb81384f523230a0a32f1ffe82961a150677c Jan 30 00:12:16 crc kubenswrapper[5104]: I0130 00:12:16.961588 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-api/machine-api-operator-755bb95488-mh68h" event={"ID":"ee7fb654-66c4-4c3d-88ff-e2f8f8b78f88","Type":"ContainerStarted","Data":"d6a478ffbabd53e48cb20a8727bb8002d94b553e086062989fcb717f4e0ff0de"} Jan 30 00:12:16 crc kubenswrapper[5104]: I0130 00:12:16.961684 5104 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-675xg"] Jan 30 00:12:16 crc kubenswrapper[5104]: I0130 00:12:16.961783 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-jn5m9" Jan 30 00:12:16 crc kubenswrapper[5104]: I0130 00:12:16.963394 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:12:16 crc kubenswrapper[5104]: I0130 00:12:16.965105 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\"" Jan 30 00:12:16 crc kubenswrapper[5104]: I0130 00:12:16.965278 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\"" Jan 30 00:12:16 crc kubenswrapper[5104]: I0130 00:12:16.966247 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\"" Jan 30 00:12:16 crc kubenswrapper[5104]: I0130 00:12:16.967218 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\"" Jan 30 00:12:16 crc kubenswrapper[5104]: I0130 00:12:16.967301 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\"" Jan 30 00:12:16 crc kubenswrapper[5104]: I0130 00:12:16.967509 5104 
reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\"" Jan 30 00:12:16 crc kubenswrapper[5104]: I0130 00:12:16.981209 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:16 crc kubenswrapper[5104]: E0130 00:12:16.981404 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:17.481363565 +0000 UTC m=+118.213702824 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:16 crc kubenswrapper[5104]: I0130 00:12:16.993611 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-xs5zv" event={"ID":"5b96d7cb-4106-4adb-baab-92ec201306e2","Type":"ContainerStarted","Data":"e2a48bc592b6d428c0cb7df8eba49753c972afff137c55f4e7a90312cbf41b58"} Jan 30 00:12:16 crc kubenswrapper[5104]: I0130 00:12:16.993665 5104 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-747b44746d-kzx6r"] Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:16.993735 5104 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:16.994377 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-675xg" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:16.996483 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\"" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:16.998890 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\"" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:16.998980 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\"" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:16.999192 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\"" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:16.999253 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\"" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:16.999460 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\"" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:16.999676 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\"" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:16.999909 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\"" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.000211 5104 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\"" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.009783 5104 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-67c89758df-g4wlb"] Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.010243 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-747b44746d-kzx6r" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.011476 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-r9m28" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.013069 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\"" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.013261 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\"" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.013506 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\"" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.040521 5104 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.083360 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06fbf10a-e423-4033-b4cb-ff77c12973d7-serving-cert\") pod \"apiserver-8596bd845d-675xg\" (UID: \"06fbf10a-e423-4033-b4cb-ff77c12973d7\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-675xg" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.083419 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/06fbf10a-e423-4033-b4cb-ff77c12973d7-audit-dir\") pod \"apiserver-8596bd845d-675xg\" (UID: \"06fbf10a-e423-4033-b4cb-ff77c12973d7\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-675xg" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.083517 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cszx\" (UniqueName: \"kubernetes.io/projected/06fbf10a-e423-4033-b4cb-ff77c12973d7-kube-api-access-4cszx\") pod \"apiserver-8596bd845d-675xg\" (UID: \"06fbf10a-e423-4033-b4cb-ff77c12973d7\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-675xg" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.083568 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/06fbf10a-e423-4033-b4cb-ff77c12973d7-etcd-serving-ca\") pod \"apiserver-8596bd845d-675xg\" (UID: \"06fbf10a-e423-4033-b4cb-ff77c12973d7\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-675xg" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.083629 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: 
\"kubernetes.io/empty-dir/d5e98104-79f7-4fb8-b554-f705833000a1-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-jn5m9\" (UID: \"d5e98104-79f7-4fb8-b554-f705833000a1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-jn5m9" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.083793 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/06fbf10a-e423-4033-b4cb-ff77c12973d7-encryption-config\") pod \"apiserver-8596bd845d-675xg\" (UID: \"06fbf10a-e423-4033-b4cb-ff77c12973d7\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-675xg" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.083873 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d5e98104-79f7-4fb8-b554-f705833000a1-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-jn5m9\" (UID: \"d5e98104-79f7-4fb8-b554-f705833000a1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-jn5m9" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.083909 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5e98104-79f7-4fb8-b554-f705833000a1-config\") pod \"openshift-kube-scheduler-operator-54f497555d-jn5m9\" (UID: \"d5e98104-79f7-4fb8-b554-f705833000a1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-jn5m9" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.083949 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/06fbf10a-e423-4033-b4cb-ff77c12973d7-audit-policies\") pod \"apiserver-8596bd845d-675xg\" (UID: \"06fbf10a-e423-4033-b4cb-ff77c12973d7\") " 
pod="openshift-oauth-apiserver/apiserver-8596bd845d-675xg" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.083980 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/06fbf10a-e423-4033-b4cb-ff77c12973d7-etcd-client\") pod \"apiserver-8596bd845d-675xg\" (UID: \"06fbf10a-e423-4033-b4cb-ff77c12973d7\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-675xg" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.084016 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d5e98104-79f7-4fb8-b554-f705833000a1-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-jn5m9\" (UID: \"d5e98104-79f7-4fb8-b554-f705833000a1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-jn5m9" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.084063 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.084100 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06fbf10a-e423-4033-b4cb-ff77c12973d7-trusted-ca-bundle\") pod \"apiserver-8596bd845d-675xg\" (UID: \"06fbf10a-e423-4033-b4cb-ff77c12973d7\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-675xg" Jan 30 00:12:17 crc kubenswrapper[5104]: E0130 00:12:17.084610 5104 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:17.584590102 +0000 UTC m=+118.316929351 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-lhbqs" (UID: "40d2656d-a61b-4aaa-8860-225ca88ac6a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.105596 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-g4wlb" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.126275 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.126459 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\"" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.126477 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\"" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.126296 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\"" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.127282 5104 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-qtds2"] Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.127305 5104 
reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\"" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.130326 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\"" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.151029 5104 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/console-64d44f6ddf-m6rzk"] Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.184611 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.184791 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a92e1ebb-86ac-4456-873b-ce575e9cda12-config\") pod \"console-operator-67c89758df-g4wlb\" (UID: \"a92e1ebb-86ac-4456-873b-ce575e9cda12\") " pod="openshift-console-operator/console-operator-67c89758df-g4wlb" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.184839 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06fbf10a-e423-4033-b4cb-ff77c12973d7-serving-cert\") pod \"apiserver-8596bd845d-675xg\" (UID: \"06fbf10a-e423-4033-b4cb-ff77c12973d7\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-675xg" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.184880 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/06fbf10a-e423-4033-b4cb-ff77c12973d7-audit-dir\") pod \"apiserver-8596bd845d-675xg\" (UID: 
\"06fbf10a-e423-4033-b4cb-ff77c12973d7\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-675xg" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.184905 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4cszx\" (UniqueName: \"kubernetes.io/projected/06fbf10a-e423-4033-b4cb-ff77c12973d7-kube-api-access-4cszx\") pod \"apiserver-8596bd845d-675xg\" (UID: \"06fbf10a-e423-4033-b4cb-ff77c12973d7\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-675xg" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.184931 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/06fbf10a-e423-4033-b4cb-ff77c12973d7-etcd-serving-ca\") pod \"apiserver-8596bd845d-675xg\" (UID: \"06fbf10a-e423-4033-b4cb-ff77c12973d7\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-675xg" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.184955 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwd4s\" (UniqueName: \"kubernetes.io/projected/a92e1ebb-86ac-4456-873b-ce575e9cda12-kube-api-access-xwd4s\") pod \"console-operator-67c89758df-g4wlb\" (UID: \"a92e1ebb-86ac-4456-873b-ce575e9cda12\") " pod="openshift-console-operator/console-operator-67c89758df-g4wlb" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.185006 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d5e98104-79f7-4fb8-b554-f705833000a1-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-jn5m9\" (UID: \"d5e98104-79f7-4fb8-b554-f705833000a1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-jn5m9" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.185047 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/06fbf10a-e423-4033-b4cb-ff77c12973d7-audit-dir\") pod \"apiserver-8596bd845d-675xg\" (UID: \"06fbf10a-e423-4033-b4cb-ff77c12973d7\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-675xg" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.185055 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/06fbf10a-e423-4033-b4cb-ff77c12973d7-encryption-config\") pod \"apiserver-8596bd845d-675xg\" (UID: \"06fbf10a-e423-4033-b4cb-ff77c12973d7\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-675xg" Jan 30 00:12:17 crc kubenswrapper[5104]: E0130 00:12:17.185149 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:17.685128906 +0000 UTC m=+118.417468125 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.186073 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d5e98104-79f7-4fb8-b554-f705833000a1-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-jn5m9\" (UID: \"d5e98104-79f7-4fb8-b554-f705833000a1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-jn5m9" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.186121 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5e98104-79f7-4fb8-b554-f705833000a1-config\") pod \"openshift-kube-scheduler-operator-54f497555d-jn5m9\" (UID: \"d5e98104-79f7-4fb8-b554-f705833000a1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-jn5m9" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.186155 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/06fbf10a-e423-4033-b4cb-ff77c12973d7-audit-policies\") pod \"apiserver-8596bd845d-675xg\" (UID: \"06fbf10a-e423-4033-b4cb-ff77c12973d7\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-675xg" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.186169 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/06fbf10a-e423-4033-b4cb-ff77c12973d7-etcd-client\") pod 
\"apiserver-8596bd845d-675xg\" (UID: \"06fbf10a-e423-4033-b4cb-ff77c12973d7\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-675xg" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.186187 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d5e98104-79f7-4fb8-b554-f705833000a1-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-jn5m9\" (UID: \"d5e98104-79f7-4fb8-b554-f705833000a1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-jn5m9" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.186222 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.186247 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06fbf10a-e423-4033-b4cb-ff77c12973d7-trusted-ca-bundle\") pod \"apiserver-8596bd845d-675xg\" (UID: \"06fbf10a-e423-4033-b4cb-ff77c12973d7\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-675xg" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.186278 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fjv6\" (UniqueName: \"kubernetes.io/projected/6fd43d75-51fe-42d6-9f2a-adbe6045f25c-kube-api-access-2fjv6\") pod \"downloads-747b44746d-kzx6r\" (UID: \"6fd43d75-51fe-42d6-9f2a-adbe6045f25c\") " pod="openshift-console/downloads-747b44746d-kzx6r" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.186309 5104 operation_generator.go:615] "MountVolume.SetUp 
succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d5e98104-79f7-4fb8-b554-f705833000a1-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-jn5m9\" (UID: \"d5e98104-79f7-4fb8-b554-f705833000a1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-jn5m9" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.186367 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a92e1ebb-86ac-4456-873b-ce575e9cda12-serving-cert\") pod \"console-operator-67c89758df-g4wlb\" (UID: \"a92e1ebb-86ac-4456-873b-ce575e9cda12\") " pod="openshift-console-operator/console-operator-67c89758df-g4wlb" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.186429 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a92e1ebb-86ac-4456-873b-ce575e9cda12-trusted-ca\") pod \"console-operator-67c89758df-g4wlb\" (UID: \"a92e1ebb-86ac-4456-873b-ce575e9cda12\") " pod="openshift-console-operator/console-operator-67c89758df-g4wlb" Jan 30 00:12:17 crc kubenswrapper[5104]: E0130 00:12:17.187575 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:17.687561952 +0000 UTC m=+118.419901171 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-lhbqs" (UID: "40d2656d-a61b-4aaa-8860-225ca88ac6a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.187769 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/06fbf10a-e423-4033-b4cb-ff77c12973d7-audit-policies\") pod \"apiserver-8596bd845d-675xg\" (UID: \"06fbf10a-e423-4033-b4cb-ff77c12973d7\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-675xg" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.187778 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5e98104-79f7-4fb8-b554-f705833000a1-config\") pod \"openshift-kube-scheduler-operator-54f497555d-jn5m9\" (UID: \"d5e98104-79f7-4fb8-b554-f705833000a1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-jn5m9" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.188187 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06fbf10a-e423-4033-b4cb-ff77c12973d7-trusted-ca-bundle\") pod \"apiserver-8596bd845d-675xg\" (UID: \"06fbf10a-e423-4033-b4cb-ff77c12973d7\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-675xg" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.194222 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d5e98104-79f7-4fb8-b554-f705833000a1-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-jn5m9\" (UID: 
\"d5e98104-79f7-4fb8-b554-f705833000a1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-jn5m9" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.199054 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/06fbf10a-e423-4033-b4cb-ff77c12973d7-etcd-serving-ca\") pod \"apiserver-8596bd845d-675xg\" (UID: \"06fbf10a-e423-4033-b4cb-ff77c12973d7\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-675xg" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.211487 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/06fbf10a-e423-4033-b4cb-ff77c12973d7-etcd-client\") pod \"apiserver-8596bd845d-675xg\" (UID: \"06fbf10a-e423-4033-b4cb-ff77c12973d7\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-675xg" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.213953 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06fbf10a-e423-4033-b4cb-ff77c12973d7-serving-cert\") pod \"apiserver-8596bd845d-675xg\" (UID: \"06fbf10a-e423-4033-b4cb-ff77c12973d7\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-675xg" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.216388 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4cszx\" (UniqueName: \"kubernetes.io/projected/06fbf10a-e423-4033-b4cb-ff77c12973d7-kube-api-access-4cszx\") pod \"apiserver-8596bd845d-675xg\" (UID: \"06fbf10a-e423-4033-b4cb-ff77c12973d7\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-675xg" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.216959 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/06fbf10a-e423-4033-b4cb-ff77c12973d7-encryption-config\") pod \"apiserver-8596bd845d-675xg\" (UID: 
\"06fbf10a-e423-4033-b4cb-ff77c12973d7\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-675xg" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.219070 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d5e98104-79f7-4fb8-b554-f705833000a1-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-jn5m9\" (UID: \"d5e98104-79f7-4fb8-b554-f705833000a1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-jn5m9" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.233730 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"7f384abfd934c466f6b6b04c2497737398978acca6a570cb8aa751cbe887602f"} Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.233782 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-d6sw5" event={"ID":"211c8215-e1c1-4bb9-881e-d2570dead87e","Type":"ContainerStarted","Data":"e0968182f019b8860e4bfea1c34642647b1f03f1bbf3daad3ab443893dd21e9b"} Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.233803 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29495520-4kxpc" event={"ID":"302f79c1-a693-494c-9a1b-360a59d439f5","Type":"ContainerStarted","Data":"4d9873237ec687e3e157e08b63838f7772090bfac9fb751ba58e8bf6f0053cf0"} Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.233817 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-l7gdh" event={"ID":"df0257f9-bd1a-4915-8db4-aec4ffda4826","Type":"ContainerStarted","Data":"b33358b75e0aa79bf6d317db840bb26bc1782e576f6a9b2cc11b1f35e34063c2"} Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.233832 5104 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-jd2mk" event={"ID":"656c26bb-2611-460d-b115-ad18f57cc138","Type":"ContainerStarted","Data":"f9edf480ce603a17240faf31cfa97bd3bf579401cc85a680b888d4d7553fe464"} Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.233866 5104 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-pzv69"] Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.235269 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-qtds2" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.238952 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\"" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.239942 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\"" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.240093 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.240281 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\"" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.287634 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:17 crc kubenswrapper[5104]: E0130 
00:12:17.287941 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:17.787725405 +0000 UTC m=+118.520064624 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.288241 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-jn5m9" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.288291 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xwd4s\" (UniqueName: \"kubernetes.io/projected/a92e1ebb-86ac-4456-873b-ce575e9cda12-kube-api-access-xwd4s\") pod \"console-operator-67c89758df-g4wlb\" (UID: \"a92e1ebb-86ac-4456-873b-ce575e9cda12\") " pod="openshift-console-operator/console-operator-67c89758df-g4wlb" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.288687 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/de15ba83-bde1-43f2-b924-65926e8a4565-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-qtds2\" (UID: \"de15ba83-bde1-43f2-b924-65926e8a4565\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-qtds2" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.288814 5104 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.291138 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2fjv6\" (UniqueName: \"kubernetes.io/projected/6fd43d75-51fe-42d6-9f2a-adbe6045f25c-kube-api-access-2fjv6\") pod \"downloads-747b44746d-kzx6r\" (UID: \"6fd43d75-51fe-42d6-9f2a-adbe6045f25c\") " pod="openshift-console/downloads-747b44746d-kzx6r" Jan 30 00:12:17 crc kubenswrapper[5104]: E0130 00:12:17.291771 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:17.791750904 +0000 UTC m=+118.524090123 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-lhbqs" (UID: "40d2656d-a61b-4aaa-8860-225ca88ac6a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.298150 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a92e1ebb-86ac-4456-873b-ce575e9cda12-serving-cert\") pod \"console-operator-67c89758df-g4wlb\" (UID: \"a92e1ebb-86ac-4456-873b-ce575e9cda12\") " pod="openshift-console-operator/console-operator-67c89758df-g4wlb" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.298876 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55svf\" (UniqueName: \"kubernetes.io/projected/de15ba83-bde1-43f2-b924-65926e8a4565-kube-api-access-55svf\") pod \"cluster-samples-operator-6b564684c8-qtds2\" (UID: \"de15ba83-bde1-43f2-b924-65926e8a4565\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-qtds2" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.298924 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a92e1ebb-86ac-4456-873b-ce575e9cda12-trusted-ca\") pod \"console-operator-67c89758df-g4wlb\" (UID: \"a92e1ebb-86ac-4456-873b-ce575e9cda12\") " pod="openshift-console-operator/console-operator-67c89758df-g4wlb" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.298958 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a92e1ebb-86ac-4456-873b-ce575e9cda12-config\") pod 
\"console-operator-67c89758df-g4wlb\" (UID: \"a92e1ebb-86ac-4456-873b-ce575e9cda12\") " pod="openshift-console-operator/console-operator-67c89758df-g4wlb" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.299695 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a92e1ebb-86ac-4456-873b-ce575e9cda12-config\") pod \"console-operator-67c89758df-g4wlb\" (UID: \"a92e1ebb-86ac-4456-873b-ce575e9cda12\") " pod="openshift-console-operator/console-operator-67c89758df-g4wlb" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.300155 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a92e1ebb-86ac-4456-873b-ce575e9cda12-trusted-ca\") pod \"console-operator-67c89758df-g4wlb\" (UID: \"a92e1ebb-86ac-4456-873b-ce575e9cda12\") " pod="openshift-console-operator/console-operator-67c89758df-g4wlb" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.301663 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a92e1ebb-86ac-4456-873b-ce575e9cda12-serving-cert\") pod \"console-operator-67c89758df-g4wlb\" (UID: \"a92e1ebb-86ac-4456-873b-ce575e9cda12\") " pod="openshift-console-operator/console-operator-67c89758df-g4wlb" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.309518 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwd4s\" (UniqueName: \"kubernetes.io/projected/a92e1ebb-86ac-4456-873b-ce575e9cda12-kube-api-access-xwd4s\") pod \"console-operator-67c89758df-g4wlb\" (UID: \"a92e1ebb-86ac-4456-873b-ce575e9cda12\") " pod="openshift-console-operator/console-operator-67c89758df-g4wlb" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.315158 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fjv6\" (UniqueName: 
\"kubernetes.io/projected/6fd43d75-51fe-42d6-9f2a-adbe6045f25c-kube-api-access-2fjv6\") pod \"downloads-747b44746d-kzx6r\" (UID: \"6fd43d75-51fe-42d6-9f2a-adbe6045f25c\") " pod="openshift-console/downloads-747b44746d-kzx6r" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.320384 5104 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-7pgw4"] Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.321397 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-64d44f6ddf-m6rzk" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.321886 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-pzv69" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.325779 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\"" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.326324 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\"" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.326582 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\"" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.327076 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\"" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.329741 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\"" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.329999 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-console\"/\"oauth-serving-cert\"" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.330167 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\"" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.330486 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\"" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.330553 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\"" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.330915 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\"" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.338271 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\"" Jan 30 00:12:17 crc kubenswrapper[5104]: W0130 00:12:17.358816 5104 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod17b87002_b798_480a_8e17_83053d698239.slice/crio-26b50ed65bb557982949793e9975ac545e1b9c5d442978287995e9e40aa3d446 WatchSource:0}: Error finding container 26b50ed65bb557982949793e9975ac545e1b9c5d442978287995e9e40aa3d446: Status 404 returned error can't find the container with id 26b50ed65bb557982949793e9975ac545e1b9c5d442978287995e9e40aa3d446 Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.361269 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-675xg" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.396769 5104 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-747b44746d-kzx6r" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.399701 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.399814 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a1f8c00b-3459-4b15-ab8c-52407669c50a-oauth-serving-cert\") pod \"console-64d44f6ddf-m6rzk\" (UID: \"a1f8c00b-3459-4b15-ab8c-52407669c50a\") " pod="openshift-console/console-64d44f6ddf-m6rzk" Jan 30 00:12:17 crc kubenswrapper[5104]: E0130 00:12:17.399940 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:17.899832001 +0000 UTC m=+118.632171220 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.400008 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wshg\" (UniqueName: \"kubernetes.io/projected/a1f8c00b-3459-4b15-ab8c-52407669c50a-kube-api-access-2wshg\") pod \"console-64d44f6ddf-m6rzk\" (UID: \"a1f8c00b-3459-4b15-ab8c-52407669c50a\") " pod="openshift-console/console-64d44f6ddf-m6rzk" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.400029 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff5aaf4d-9812-4773-bcd9-a6901952e242-serving-cert\") pod \"kube-apiserver-operator-575994946d-pzv69\" (UID: \"ff5aaf4d-9812-4773-bcd9-a6901952e242\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-pzv69" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.400054 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a1f8c00b-3459-4b15-ab8c-52407669c50a-service-ca\") pod \"console-64d44f6ddf-m6rzk\" (UID: \"a1f8c00b-3459-4b15-ab8c-52407669c50a\") " pod="openshift-console/console-64d44f6ddf-m6rzk" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.400074 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/ff5aaf4d-9812-4773-bcd9-a6901952e242-tmp-dir\") pod 
\"kube-apiserver-operator-575994946d-pzv69\" (UID: \"ff5aaf4d-9812-4773-bcd9-a6901952e242\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-pzv69" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.400146 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/de15ba83-bde1-43f2-b924-65926e8a4565-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-qtds2\" (UID: \"de15ba83-bde1-43f2-b924-65926e8a4565\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-qtds2" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.400181 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ff5aaf4d-9812-4773-bcd9-a6901952e242-kube-api-access\") pod \"kube-apiserver-operator-575994946d-pzv69\" (UID: \"ff5aaf4d-9812-4773-bcd9-a6901952e242\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-pzv69" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.400221 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a1f8c00b-3459-4b15-ab8c-52407669c50a-console-serving-cert\") pod \"console-64d44f6ddf-m6rzk\" (UID: \"a1f8c00b-3459-4b15-ab8c-52407669c50a\") " pod="openshift-console/console-64d44f6ddf-m6rzk" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.400315 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 
00:12:17.400427 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a1f8c00b-3459-4b15-ab8c-52407669c50a-console-oauth-config\") pod \"console-64d44f6ddf-m6rzk\" (UID: \"a1f8c00b-3459-4b15-ab8c-52407669c50a\") " pod="openshift-console/console-64d44f6ddf-m6rzk" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.400466 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a1f8c00b-3459-4b15-ab8c-52407669c50a-trusted-ca-bundle\") pod \"console-64d44f6ddf-m6rzk\" (UID: \"a1f8c00b-3459-4b15-ab8c-52407669c50a\") " pod="openshift-console/console-64d44f6ddf-m6rzk" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.400492 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a1f8c00b-3459-4b15-ab8c-52407669c50a-console-config\") pod \"console-64d44f6ddf-m6rzk\" (UID: \"a1f8c00b-3459-4b15-ab8c-52407669c50a\") " pod="openshift-console/console-64d44f6ddf-m6rzk" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.400525 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-55svf\" (UniqueName: \"kubernetes.io/projected/de15ba83-bde1-43f2-b924-65926e8a4565-kube-api-access-55svf\") pod \"cluster-samples-operator-6b564684c8-qtds2\" (UID: \"de15ba83-bde1-43f2-b924-65926e8a4565\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-qtds2" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.400606 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff5aaf4d-9812-4773-bcd9-a6901952e242-config\") pod \"kube-apiserver-operator-575994946d-pzv69\" (UID: 
\"ff5aaf4d-9812-4773-bcd9-a6901952e242\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-pzv69" Jan 30 00:12:17 crc kubenswrapper[5104]: E0130 00:12:17.400620 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:17.900603532 +0000 UTC m=+118.632942751 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-lhbqs" (UID: "40d2656d-a61b-4aaa-8860-225ca88ac6a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.410168 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/de15ba83-bde1-43f2-b924-65926e8a4565-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-qtds2\" (UID: \"de15ba83-bde1-43f2-b924-65926e8a4565\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-qtds2" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.418108 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-55svf\" (UniqueName: \"kubernetes.io/projected/de15ba83-bde1-43f2-b924-65926e8a4565-kube-api-access-55svf\") pod \"cluster-samples-operator-6b564684c8-qtds2\" (UID: \"de15ba83-bde1-43f2-b924-65926e8a4565\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-qtds2" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.461579 5104 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-g4wlb" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.501410 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:17 crc kubenswrapper[5104]: E0130 00:12:17.501530 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:18.001503896 +0000 UTC m=+118.733843115 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.502018 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a1f8c00b-3459-4b15-ab8c-52407669c50a-console-config\") pod \"console-64d44f6ddf-m6rzk\" (UID: \"a1f8c00b-3459-4b15-ab8c-52407669c50a\") " pod="openshift-console/console-64d44f6ddf-m6rzk" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.502077 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff5aaf4d-9812-4773-bcd9-a6901952e242-config\") pod 
\"kube-apiserver-operator-575994946d-pzv69\" (UID: \"ff5aaf4d-9812-4773-bcd9-a6901952e242\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-pzv69" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.502112 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a1f8c00b-3459-4b15-ab8c-52407669c50a-oauth-serving-cert\") pod \"console-64d44f6ddf-m6rzk\" (UID: \"a1f8c00b-3459-4b15-ab8c-52407669c50a\") " pod="openshift-console/console-64d44f6ddf-m6rzk" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.502136 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2wshg\" (UniqueName: \"kubernetes.io/projected/a1f8c00b-3459-4b15-ab8c-52407669c50a-kube-api-access-2wshg\") pod \"console-64d44f6ddf-m6rzk\" (UID: \"a1f8c00b-3459-4b15-ab8c-52407669c50a\") " pod="openshift-console/console-64d44f6ddf-m6rzk" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.502156 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff5aaf4d-9812-4773-bcd9-a6901952e242-serving-cert\") pod \"kube-apiserver-operator-575994946d-pzv69\" (UID: \"ff5aaf4d-9812-4773-bcd9-a6901952e242\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-pzv69" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.502184 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a1f8c00b-3459-4b15-ab8c-52407669c50a-service-ca\") pod \"console-64d44f6ddf-m6rzk\" (UID: \"a1f8c00b-3459-4b15-ab8c-52407669c50a\") " pod="openshift-console/console-64d44f6ddf-m6rzk" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.502207 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: 
\"kubernetes.io/empty-dir/ff5aaf4d-9812-4773-bcd9-a6901952e242-tmp-dir\") pod \"kube-apiserver-operator-575994946d-pzv69\" (UID: \"ff5aaf4d-9812-4773-bcd9-a6901952e242\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-pzv69" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.502378 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ff5aaf4d-9812-4773-bcd9-a6901952e242-kube-api-access\") pod \"kube-apiserver-operator-575994946d-pzv69\" (UID: \"ff5aaf4d-9812-4773-bcd9-a6901952e242\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-pzv69" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.502416 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a1f8c00b-3459-4b15-ab8c-52407669c50a-console-serving-cert\") pod \"console-64d44f6ddf-m6rzk\" (UID: \"a1f8c00b-3459-4b15-ab8c-52407669c50a\") " pod="openshift-console/console-64d44f6ddf-m6rzk" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.502488 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.502541 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a1f8c00b-3459-4b15-ab8c-52407669c50a-console-oauth-config\") pod \"console-64d44f6ddf-m6rzk\" (UID: \"a1f8c00b-3459-4b15-ab8c-52407669c50a\") " pod="openshift-console/console-64d44f6ddf-m6rzk" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 
00:12:17.502578 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a1f8c00b-3459-4b15-ab8c-52407669c50a-trusted-ca-bundle\") pod \"console-64d44f6ddf-m6rzk\" (UID: \"a1f8c00b-3459-4b15-ab8c-52407669c50a\") " pod="openshift-console/console-64d44f6ddf-m6rzk" Jan 30 00:12:17 crc kubenswrapper[5104]: E0130 00:12:17.502829 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:18.002817781 +0000 UTC m=+118.735157000 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-lhbqs" (UID: "40d2656d-a61b-4aaa-8860-225ca88ac6a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.502894 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/ff5aaf4d-9812-4773-bcd9-a6901952e242-tmp-dir\") pod \"kube-apiserver-operator-575994946d-pzv69\" (UID: \"ff5aaf4d-9812-4773-bcd9-a6901952e242\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-pzv69" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.503040 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff5aaf4d-9812-4773-bcd9-a6901952e242-config\") pod \"kube-apiserver-operator-575994946d-pzv69\" (UID: \"ff5aaf4d-9812-4773-bcd9-a6901952e242\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-pzv69" Jan 30 
00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.503885 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a1f8c00b-3459-4b15-ab8c-52407669c50a-service-ca\") pod \"console-64d44f6ddf-m6rzk\" (UID: \"a1f8c00b-3459-4b15-ab8c-52407669c50a\") " pod="openshift-console/console-64d44f6ddf-m6rzk" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.504175 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a1f8c00b-3459-4b15-ab8c-52407669c50a-oauth-serving-cert\") pod \"console-64d44f6ddf-m6rzk\" (UID: \"a1f8c00b-3459-4b15-ab8c-52407669c50a\") " pod="openshift-console/console-64d44f6ddf-m6rzk" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.504175 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a1f8c00b-3459-4b15-ab8c-52407669c50a-console-config\") pod \"console-64d44f6ddf-m6rzk\" (UID: \"a1f8c00b-3459-4b15-ab8c-52407669c50a\") " pod="openshift-console/console-64d44f6ddf-m6rzk" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.504981 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a1f8c00b-3459-4b15-ab8c-52407669c50a-trusted-ca-bundle\") pod \"console-64d44f6ddf-m6rzk\" (UID: \"a1f8c00b-3459-4b15-ab8c-52407669c50a\") " pod="openshift-console/console-64d44f6ddf-m6rzk" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.507728 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a1f8c00b-3459-4b15-ab8c-52407669c50a-console-serving-cert\") pod \"console-64d44f6ddf-m6rzk\" (UID: \"a1f8c00b-3459-4b15-ab8c-52407669c50a\") " pod="openshift-console/console-64d44f6ddf-m6rzk" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.507910 5104 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff5aaf4d-9812-4773-bcd9-a6901952e242-serving-cert\") pod \"kube-apiserver-operator-575994946d-pzv69\" (UID: \"ff5aaf4d-9812-4773-bcd9-a6901952e242\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-pzv69" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.508692 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a1f8c00b-3459-4b15-ab8c-52407669c50a-console-oauth-config\") pod \"console-64d44f6ddf-m6rzk\" (UID: \"a1f8c00b-3459-4b15-ab8c-52407669c50a\") " pod="openshift-console/console-64d44f6ddf-m6rzk" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.523506 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ff5aaf4d-9812-4773-bcd9-a6901952e242-kube-api-access\") pod \"kube-apiserver-operator-575994946d-pzv69\" (UID: \"ff5aaf4d-9812-4773-bcd9-a6901952e242\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-pzv69" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.523764 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wshg\" (UniqueName: \"kubernetes.io/projected/a1f8c00b-3459-4b15-ab8c-52407669c50a-kube-api-access-2wshg\") pod \"console-64d44f6ddf-m6rzk\" (UID: \"a1f8c00b-3459-4b15-ab8c-52407669c50a\") " pod="openshift-console/console-64d44f6ddf-m6rzk" Jan 30 00:12:17 crc kubenswrapper[5104]: W0130 00:12:17.531742 5104 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd5e98104_79f7_4fb8_b554_f705833000a1.slice/crio-a3870c70201257285bebe6cb1743ba0d37d1fe229064bf433cf7defed65bc525 WatchSource:0}: Error finding container a3870c70201257285bebe6cb1743ba0d37d1fe229064bf433cf7defed65bc525: Status 404 
returned error can't find the container with id a3870c70201257285bebe6cb1743ba0d37d1fe229064bf433cf7defed65bc525 Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.555406 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-qtds2" Jan 30 00:12:17 crc kubenswrapper[5104]: W0130 00:12:17.555514 5104 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod06fbf10a_e423_4033_b4cb_ff77c12973d7.slice/crio-d35ca8db59a21582f1ee1ca527eee8274976c636aca933676ce10353d36429ea WatchSource:0}: Error finding container d35ca8db59a21582f1ee1ca527eee8274976c636aca933676ce10353d36429ea: Status 404 returned error can't find the container with id d35ca8db59a21582f1ee1ca527eee8274976c636aca933676ce10353d36429ea Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.603543 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:17 crc kubenswrapper[5104]: E0130 00:12:17.603836 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:18.103802297 +0000 UTC m=+118.836141516 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.630263 5104 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-vklm2"] Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.630457 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-7pgw4" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.634012 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\"" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.634459 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.634560 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\"" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.634789 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\"" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.635274 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\"" Jan 30 00:12:17 crc kubenswrapper[5104]: W0130 00:12:17.686511 5104 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda92e1ebb_86ac_4456_873b_ce575e9cda12.slice/crio-2d7bd8eb2b37f01baf28b5ebf75bc74d0191dcf1595af0d95f4b2626e3ac6b27 WatchSource:0}: Error finding container 2d7bd8eb2b37f01baf28b5ebf75bc74d0191dcf1595af0d95f4b2626e3ac6b27: Status 404 returned error can't find the container with id 2d7bd8eb2b37f01baf28b5ebf75bc74d0191dcf1595af0d95f4b2626e3ac6b27 Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.690955 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-64d44f6ddf-m6rzk" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.697038 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-pzv69" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.704521 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.704561 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bad64408-8e74-460b-b652-f12f5920cd21-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-7pgw4\" (UID: \"bad64408-8e74-460b-b652-f12f5920cd21\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-7pgw4" Jan 30 00:12:17 crc 
kubenswrapper[5104]: I0130 00:12:17.704586 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bad64408-8e74-460b-b652-f12f5920cd21-config\") pod \"openshift-controller-manager-operator-686468bdd5-7pgw4\" (UID: \"bad64408-8e74-460b-b652-f12f5920cd21\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-7pgw4" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.704602 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/bad64408-8e74-460b-b652-f12f5920cd21-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-7pgw4\" (UID: \"bad64408-8e74-460b-b652-f12f5920cd21\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-7pgw4" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.704618 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lff4c\" (UniqueName: \"kubernetes.io/projected/bad64408-8e74-460b-b652-f12f5920cd21-kube-api-access-lff4c\") pod \"openshift-controller-manager-operator-686468bdd5-7pgw4\" (UID: \"bad64408-8e74-460b-b652-f12f5920cd21\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-7pgw4" Jan 30 00:12:17 crc kubenswrapper[5104]: E0130 00:12:17.704966 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:18.204952397 +0000 UTC m=+118.937291616 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-lhbqs" (UID: "40d2656d-a61b-4aaa-8860-225ca88ac6a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.805447 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.805910 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bad64408-8e74-460b-b652-f12f5920cd21-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-7pgw4\" (UID: \"bad64408-8e74-460b-b652-f12f5920cd21\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-7pgw4" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.805952 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bad64408-8e74-460b-b652-f12f5920cd21-config\") pod \"openshift-controller-manager-operator-686468bdd5-7pgw4\" (UID: \"bad64408-8e74-460b-b652-f12f5920cd21\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-7pgw4" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.805976 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: 
\"kubernetes.io/empty-dir/bad64408-8e74-460b-b652-f12f5920cd21-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-7pgw4\" (UID: \"bad64408-8e74-460b-b652-f12f5920cd21\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-7pgw4" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.805999 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lff4c\" (UniqueName: \"kubernetes.io/projected/bad64408-8e74-460b-b652-f12f5920cd21-kube-api-access-lff4c\") pod \"openshift-controller-manager-operator-686468bdd5-7pgw4\" (UID: \"bad64408-8e74-460b-b652-f12f5920cd21\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-7pgw4" Jan 30 00:12:17 crc kubenswrapper[5104]: E0130 00:12:17.806035 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:18.306002634 +0000 UTC m=+119.038341853 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.806937 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bad64408-8e74-460b-b652-f12f5920cd21-config\") pod \"openshift-controller-manager-operator-686468bdd5-7pgw4\" (UID: \"bad64408-8e74-460b-b652-f12f5920cd21\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-7pgw4" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.807166 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/bad64408-8e74-460b-b652-f12f5920cd21-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-7pgw4\" (UID: \"bad64408-8e74-460b-b652-f12f5920cd21\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-7pgw4" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.813239 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bad64408-8e74-460b-b652-f12f5920cd21-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-7pgw4\" (UID: \"bad64408-8e74-460b-b652-f12f5920cd21\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-7pgw4" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.830991 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lff4c\" (UniqueName: 
\"kubernetes.io/projected/bad64408-8e74-460b-b652-f12f5920cd21-kube-api-access-lff4c\") pod \"openshift-controller-manager-operator-686468bdd5-7pgw4\" (UID: \"bad64408-8e74-460b-b652-f12f5920cd21\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-7pgw4" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.898068 5104 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-896z6"] Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.898263 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-vklm2" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.902060 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\"" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.902263 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.902406 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\"" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.902805 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\"" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.907071 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:17 crc kubenswrapper[5104]: E0130 
00:12:17.907796 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:18.407780562 +0000 UTC m=+119.140119791 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-lhbqs" (UID: "40d2656d-a61b-4aaa-8860-225ca88ac6a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.916180 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\"" Jan 30 00:12:17 crc kubenswrapper[5104]: I0130 00:12:17.955528 5104 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-7pgw4" Jan 30 00:12:17 crc kubenswrapper[5104]: W0130 00:12:17.958824 5104 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1f8c00b_3459_4b15_ab8c_52407669c50a.slice/crio-65ad0a1ab31b9a95575c9d86939b50ee0156a77d143eb293cea5bb02d65b82a3 WatchSource:0}: Error finding container 65ad0a1ab31b9a95575c9d86939b50ee0156a77d143eb293cea5bb02d65b82a3: Status 404 returned error can't find the container with id 65ad0a1ab31b9a95575c9d86939b50ee0156a77d143eb293cea5bb02d65b82a3 Jan 30 00:12:17 crc kubenswrapper[5104]: W0130 00:12:17.992886 5104 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podff5aaf4d_9812_4773_bcd9_a6901952e242.slice/crio-a6930d457824529cf94e22e6fb6bf202c249abc09bd79c549a303d2360c3ca81 WatchSource:0}: Error finding container a6930d457824529cf94e22e6fb6bf202c249abc09bd79c549a303d2360c3ca81: Status 404 returned error can't find the container with id a6930d457824529cf94e22e6fb6bf202c249abc09bd79c549a303d2360c3ca81 Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.008326 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.008517 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/994594a7-ccc0-4f06-84ca-89f4e3561a2f-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-vklm2\" (UID: \"994594a7-ccc0-4f06-84ca-89f4e3561a2f\") " 
pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-vklm2" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.008546 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/994594a7-ccc0-4f06-84ca-89f4e3561a2f-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-vklm2\" (UID: \"994594a7-ccc0-4f06-84ca-89f4e3561a2f\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-vklm2" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.008585 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rlp8\" (UniqueName: \"kubernetes.io/projected/994594a7-ccc0-4f06-84ca-89f4e3561a2f-kube-api-access-5rlp8\") pod \"ingress-operator-6b9cb4dbcf-vklm2\" (UID: \"994594a7-ccc0-4f06-84ca-89f4e3561a2f\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-vklm2" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.008609 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/994594a7-ccc0-4f06-84ca-89f4e3561a2f-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-vklm2\" (UID: \"994594a7-ccc0-4f06-84ca-89f4e3561a2f\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-vklm2" Jan 30 00:12:18 crc kubenswrapper[5104]: E0130 00:12:18.008699 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:18.508685066 +0000 UTC m=+119.241024275 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.110136 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5rlp8\" (UniqueName: \"kubernetes.io/projected/994594a7-ccc0-4f06-84ca-89f4e3561a2f-kube-api-access-5rlp8\") pod \"ingress-operator-6b9cb4dbcf-vklm2\" (UID: \"994594a7-ccc0-4f06-84ca-89f4e3561a2f\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-vklm2" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.110183 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/994594a7-ccc0-4f06-84ca-89f4e3561a2f-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-vklm2\" (UID: \"994594a7-ccc0-4f06-84ca-89f4e3561a2f\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-vklm2" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.110230 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.110260 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/994594a7-ccc0-4f06-84ca-89f4e3561a2f-bound-sa-token\") pod 
\"ingress-operator-6b9cb4dbcf-vklm2\" (UID: \"994594a7-ccc0-4f06-84ca-89f4e3561a2f\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-vklm2" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.110276 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/994594a7-ccc0-4f06-84ca-89f4e3561a2f-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-vklm2\" (UID: \"994594a7-ccc0-4f06-84ca-89f4e3561a2f\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-vklm2" Jan 30 00:12:18 crc kubenswrapper[5104]: E0130 00:12:18.110844 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:18.610822883 +0000 UTC m=+119.343162102 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-lhbqs" (UID: "40d2656d-a61b-4aaa-8860-225ca88ac6a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.111987 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/994594a7-ccc0-4f06-84ca-89f4e3561a2f-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-vklm2\" (UID: \"994594a7-ccc0-4f06-84ca-89f4e3561a2f\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-vklm2" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.124116 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/994594a7-ccc0-4f06-84ca-89f4e3561a2f-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-vklm2\" (UID: \"994594a7-ccc0-4f06-84ca-89f4e3561a2f\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-vklm2" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.126531 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rlp8\" (UniqueName: \"kubernetes.io/projected/994594a7-ccc0-4f06-84ca-89f4e3561a2f-kube-api-access-5rlp8\") pod \"ingress-operator-6b9cb4dbcf-vklm2\" (UID: \"994594a7-ccc0-4f06-84ca-89f4e3561a2f\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-vklm2" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.133687 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/994594a7-ccc0-4f06-84ca-89f4e3561a2f-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-vklm2\" (UID: \"994594a7-ccc0-4f06-84ca-89f4e3561a2f\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-vklm2" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.169537 5104 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-x97vp"] Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.169778 5104 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-896z6" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.172272 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\"" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.172546 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\"" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.173063 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\"" Jan 30 00:12:18 crc kubenswrapper[5104]: W0130 00:12:18.206578 5104 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbad64408_8e74_460b_b652_f12f5920cd21.slice/crio-362fc795e929f720465474193cfab9403ca2b891381a7d7211411655f95fc86b WatchSource:0}: Error finding container 362fc795e929f720465474193cfab9403ca2b891381a7d7211411655f95fc86b: Status 404 returned error can't find the container with id 362fc795e929f720465474193cfab9403ca2b891381a7d7211411655f95fc86b Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.210924 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:18 crc kubenswrapper[5104]: E0130 00:12:18.211078 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-01-30 00:12:18.711050628 +0000 UTC m=+119.443389837 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.211291 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:18 crc kubenswrapper[5104]: E0130 00:12:18.211707 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:18.711700526 +0000 UTC m=+119.444039745 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-lhbqs" (UID: "40d2656d-a61b-4aaa-8860-225ca88ac6a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.241685 5104 generic.go:358] "Generic (PLEG): container finished" podID="1bef0b46-9def-441e-88e8-f481e45026da" containerID="338e72eeb0ce8824e99cf3a4f499b4422ce36ab98718339666e4257f685ae5fa" exitCode=0 Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.274427 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-gvjb6" event={"ID":"8549d8ab-08fd-4d10-b03e-d162d745184a","Type":"ContainerStarted","Data":"003e6b0220d7dcb350f8f94358fbb81384f523230a0a32f1ffe82961a150677c"} Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.274745 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-xkl2m" event={"ID":"5c4af38b-fd2a-49b5-be40-cbd25eba4bde","Type":"ContainerStarted","Data":"aeac6516f926fd1824724c57efb16274540177c9c44cab85d5905cacec3189c5"} Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.274789 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-c5tsr" event={"ID":"ff629e62-b58e-4d85-aa96-fbc1845b304b","Type":"ContainerStarted","Data":"eb7550f4e431003bb67113687f3142c13f17529aa85082ce1bb3423350829ff7"} Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.274804 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-d9wqk" 
event={"ID":"1bef0b46-9def-441e-88e8-f481e45026da","Type":"ContainerStarted","Data":"9c6d0cf908ae2a7e6b99e9950491e850d7ad7412abc806efe1711542c45aeecf"} Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.274815 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-g766x" event={"ID":"c47b4509-0bb1-4360-9db3-29ebfcd734e3","Type":"ContainerStarted","Data":"8765d8b13fbc965d68b29dcd8d2dfd68578d3842f074689b719b57978f5048c4"} Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.274828 5104 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-6zt5n"] Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.275075 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-x97vp" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.276511 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\"" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.276785 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\"" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.278616 5104 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-vklm2" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.282900 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\"" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.282921 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\"" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.295663 5104 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-pdjtd"] Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.295997 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-6zt5n" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.299766 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\"" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.299815 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\"" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.300295 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\"" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.306103 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.306218 5104 reflector.go:430] "Caches populated" 
type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\"" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.306834 5104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-d6sw5" podStartSLOduration=95.306814883 podStartE2EDuration="1m35.306814883s" podCreationTimestamp="2026-01-30 00:10:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:18.299749573 +0000 UTC m=+119.032088832" watchObservedRunningTime="2026-01-30 00:12:18.306814883 +0000 UTC m=+119.039154112" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.312464 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.312714 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gh45z\" (UniqueName: \"kubernetes.io/projected/a15215a1-cfcc-4601-9e8c-1726c1837773-kube-api-access-gh45z\") pod \"migrator-866fcbc849-896z6\" (UID: \"a15215a1-cfcc-4601-9e8c-1726c1837773\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-896z6" Jan 30 00:12:18 crc kubenswrapper[5104]: E0130 00:12:18.313093 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:18.813074162 +0000 UTC m=+119.545413391 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.342374 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-v56dx" event={"ID":"512ba09a-c537-4c10-86c4-6226498ce0e0","Type":"ContainerStarted","Data":"1fa7398b5fc8d48e88e2642907b52993fb46e8c3fe0e037191652c360822dc29"} Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.342421 5104 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-gkzts"] Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.343699 5104 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-pdjtd" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.344926 5104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-65b6cccf98-l7gdh" podStartSLOduration=95.344905212 podStartE2EDuration="1m35.344905212s" podCreationTimestamp="2026-01-30 00:10:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:18.320987076 +0000 UTC m=+119.053326305" watchObservedRunningTime="2026-01-30 00:12:18.344905212 +0000 UTC m=+119.077244431" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.347344 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\"" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.348095 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\"" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.348465 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\"" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.348629 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\"" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.348781 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\"" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.385712 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-mh68h" 
event={"ID":"ee7fb654-66c4-4c3d-88ff-e2f8f8b78f88","Type":"ContainerStarted","Data":"f7b6aa0dfe9755dbc561ff9414c0a1a4dbd0de9bd90824649c1d905755b69138"} Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.385762 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29495520-4kxpc" event={"ID":"302f79c1-a693-494c-9a1b-360a59d439f5","Type":"ContainerStarted","Data":"e02dca27cb8af76deb0425b9190bc0043c87a255dfa1c4bd9510ec06dab8b283"} Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.385786 5104 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-dmtb5"] Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.385947 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-gkzts" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.388301 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\"" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.388487 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\"" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.412301 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-l7gdh" event={"ID":"df0257f9-bd1a-4915-8db4-aec4ffda4826","Type":"ContainerStarted","Data":"b54cc5a542dfa3209f1d5177015a29e5cdb0a438e60ec01168813800d448a4e3"} Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.412665 5104 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-6rcx2"] Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.412832 5104 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-dmtb5" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.413463 5104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-68cf44c8b8-xs5zv" podStartSLOduration=95.413445941 podStartE2EDuration="1m35.413445941s" podCreationTimestamp="2026-01-30 00:10:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:18.399110435 +0000 UTC m=+119.131449674" watchObservedRunningTime="2026-01-30 00:12:18.413445941 +0000 UTC m=+119.145785160" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.415840 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gh45z\" (UniqueName: \"kubernetes.io/projected/a15215a1-cfcc-4601-9e8c-1726c1837773-kube-api-access-gh45z\") pod \"migrator-866fcbc849-896z6\" (UID: \"a15215a1-cfcc-4601-9e8c-1726c1837773\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-896z6" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.417218 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/5cd08e97-a118-4f88-b699-2a0bb507b241-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-x97vp\" (UID: \"5cd08e97-a118-4f88-b699-2a0bb507b241\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-x97vp" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.417363 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\"" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.417823 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/e47c44fb-6570-4180-9ce2-311e50c7956c-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-6zt5n\" (UID: \"e47c44fb-6570-4180-9ce2-311e50c7956c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-6zt5n" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.417954 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5cd08e97-a118-4f88-b699-2a0bb507b241-config\") pod \"kube-controller-manager-operator-69d5f845f8-x97vp\" (UID: \"5cd08e97-a118-4f88-b699-2a0bb507b241\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-x97vp" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.417996 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.418077 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5cd08e97-a118-4f88-b699-2a0bb507b241-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-x97vp\" (UID: \"5cd08e97-a118-4f88-b699-2a0bb507b241\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-x97vp" Jan 30 00:12:18 crc kubenswrapper[5104]: E0130 00:12:18.419667 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2026-01-30 00:12:18.91965378 +0000 UTC m=+119.651992999 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-lhbqs" (UID: "40d2656d-a61b-4aaa-8860-225ca88ac6a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.419839 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e47c44fb-6570-4180-9ce2-311e50c7956c-config\") pod \"kube-storage-version-migrator-operator-565b79b866-6zt5n\" (UID: \"e47c44fb-6570-4180-9ce2-311e50c7956c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-6zt5n" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.419930 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5cd08e97-a118-4f88-b699-2a0bb507b241-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-x97vp\" (UID: \"5cd08e97-a118-4f88-b699-2a0bb507b241\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-x97vp" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.420887 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tk4zh\" (UniqueName: \"kubernetes.io/projected/e47c44fb-6570-4180-9ce2-311e50c7956c-kube-api-access-tk4zh\") pod \"kube-storage-version-migrator-operator-565b79b866-6zt5n\" (UID: \"e47c44fb-6570-4180-9ce2-311e50c7956c\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-6zt5n" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.421798 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\"" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.422488 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\"" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.440727 5104 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-tgcbf"] Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.441073 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-6rcx2" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.441722 5104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-pruner-29495520-4kxpc" podStartSLOduration=95.441711854 podStartE2EDuration="1m35.441711854s" podCreationTimestamp="2026-01-30 00:10:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:18.423989266 +0000 UTC m=+119.156328645" watchObservedRunningTime="2026-01-30 00:12:18.441711854 +0000 UTC m=+119.174051073" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.449406 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\"" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.455131 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gh45z\" (UniqueName: 
\"kubernetes.io/projected/a15215a1-cfcc-4601-9e8c-1726c1837773-kube-api-access-gh45z\") pod \"migrator-866fcbc849-896z6\" (UID: \"a15215a1-cfcc-4601-9e8c-1726c1837773\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-896z6" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.464415 5104 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-9lr7t"] Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.485159 5104 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-grfh9"] Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.485197 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-896z6" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.485914 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-tgcbf" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.486567 5104 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-9lr7t" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.488331 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\"" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.488977 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\"" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.489142 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\"" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.493864 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\"" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.504614 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-675xg" event={"ID":"06fbf10a-e423-4033-b4cb-ff77c12973d7","Type":"ContainerStarted","Data":"d35ca8db59a21582f1ee1ca527eee8274976c636aca933676ce10353d36429ea"} Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.504652 5104 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-bn2ph"] Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.523103 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:18 crc kubenswrapper[5104]: E0130 00:12:18.523233 5104 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:19.023207174 +0000 UTC m=+119.755546393 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.523444 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ac96d3d5-fde2-4526-9d1d-ed33ebf8a909-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-dmtb5\" (UID: \"ac96d3d5-fde2-4526-9d1d-ed33ebf8a909\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-dmtb5" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.523480 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5cd08e97-a118-4f88-b699-2a0bb507b241-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-x97vp\" (UID: \"5cd08e97-a118-4f88-b699-2a0bb507b241\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-x97vp" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.523500 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/254976ad-3d5f-484e-b3ec-4dbc14567032-webhook-certs\") pod \"multus-admission-controller-69db94689b-gkzts\" (UID: 
\"254976ad-3d5f-484e-b3ec-4dbc14567032\") " pod="openshift-multus/multus-admission-controller-69db94689b-gkzts" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.523526 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e47c44fb-6570-4180-9ce2-311e50c7956c-config\") pod \"kube-storage-version-migrator-operator-565b79b866-6zt5n\" (UID: \"e47c44fb-6570-4180-9ce2-311e50c7956c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-6zt5n" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.523555 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5cd08e97-a118-4f88-b699-2a0bb507b241-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-x97vp\" (UID: \"5cd08e97-a118-4f88-b699-2a0bb507b241\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-x97vp" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.523571 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9p7ns\" (UniqueName: \"kubernetes.io/projected/ac96d3d5-fde2-4526-9d1d-ed33ebf8a909-kube-api-access-9p7ns\") pod \"machine-config-operator-67c9d58cbb-dmtb5\" (UID: \"ac96d3d5-fde2-4526-9d1d-ed33ebf8a909\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-dmtb5" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.523595 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79gjk\" (UniqueName: \"kubernetes.io/projected/254976ad-3d5f-484e-b3ec-4dbc14567032-kube-api-access-79gjk\") pod \"multus-admission-controller-69db94689b-gkzts\" (UID: \"254976ad-3d5f-484e-b3ec-4dbc14567032\") " pod="openshift-multus/multus-admission-controller-69db94689b-gkzts" Jan 30 00:12:18 crc 
kubenswrapper[5104]: I0130 00:12:18.523620 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvptv\" (UniqueName: \"kubernetes.io/projected/51278e19-fef3-4056-bdb6-f9f60f3a65e0-kube-api-access-cvptv\") pod \"olm-operator-5cdf44d969-pdjtd\" (UID: \"51278e19-fef3-4056-bdb6-f9f60f3a65e0\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-pdjtd" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.523641 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tk4zh\" (UniqueName: \"kubernetes.io/projected/e47c44fb-6570-4180-9ce2-311e50c7956c-kube-api-access-tk4zh\") pod \"kube-storage-version-migrator-operator-565b79b866-6zt5n\" (UID: \"e47c44fb-6570-4180-9ce2-311e50c7956c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-6zt5n" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.523725 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/5cd08e97-a118-4f88-b699-2a0bb507b241-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-x97vp\" (UID: \"5cd08e97-a118-4f88-b699-2a0bb507b241\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-x97vp" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.523745 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/51278e19-fef3-4056-bdb6-f9f60f3a65e0-profile-collector-cert\") pod \"olm-operator-5cdf44d969-pdjtd\" (UID: \"51278e19-fef3-4056-bdb6-f9f60f3a65e0\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-pdjtd" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.523769 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ac96d3d5-fde2-4526-9d1d-ed33ebf8a909-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-dmtb5\" (UID: \"ac96d3d5-fde2-4526-9d1d-ed33ebf8a909\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-dmtb5" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.523785 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/51278e19-fef3-4056-bdb6-f9f60f3a65e0-srv-cert\") pod \"olm-operator-5cdf44d969-pdjtd\" (UID: \"51278e19-fef3-4056-bdb6-f9f60f3a65e0\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-pdjtd" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.523832 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e47c44fb-6570-4180-9ce2-311e50c7956c-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-6zt5n\" (UID: \"e47c44fb-6570-4180-9ce2-311e50c7956c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-6zt5n" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.523882 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/51278e19-fef3-4056-bdb6-f9f60f3a65e0-tmpfs\") pod \"olm-operator-5cdf44d969-pdjtd\" (UID: \"51278e19-fef3-4056-bdb6-f9f60f3a65e0\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-pdjtd" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.523921 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5cd08e97-a118-4f88-b699-2a0bb507b241-config\") pod \"kube-controller-manager-operator-69d5f845f8-x97vp\" (UID: \"5cd08e97-a118-4f88-b699-2a0bb507b241\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-x97vp" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.523953 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.523972 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ac96d3d5-fde2-4526-9d1d-ed33ebf8a909-images\") pod \"machine-config-operator-67c9d58cbb-dmtb5\" (UID: \"ac96d3d5-fde2-4526-9d1d-ed33ebf8a909\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-dmtb5" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.524451 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/5cd08e97-a118-4f88-b699-2a0bb507b241-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-x97vp\" (UID: \"5cd08e97-a118-4f88-b699-2a0bb507b241\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-x97vp" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.524684 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e47c44fb-6570-4180-9ce2-311e50c7956c-config\") pod \"kube-storage-version-migrator-operator-565b79b866-6zt5n\" (UID: \"e47c44fb-6570-4180-9ce2-311e50c7956c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-6zt5n" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.524889 5104 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5cd08e97-a118-4f88-b699-2a0bb507b241-config\") pod \"kube-controller-manager-operator-69d5f845f8-x97vp\" (UID: \"5cd08e97-a118-4f88-b699-2a0bb507b241\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-x97vp" Jan 30 00:12:18 crc kubenswrapper[5104]: E0130 00:12:18.525083 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:19.025066354 +0000 UTC m=+119.757405573 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-lhbqs" (UID: "40d2656d-a61b-4aaa-8860-225ca88ac6a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.527538 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-bn2ph" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.527571 5104 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-grfh9" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.530938 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e47c44fb-6570-4180-9ce2-311e50c7956c-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-6zt5n\" (UID: \"e47c44fb-6570-4180-9ce2-311e50c7956c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-6zt5n" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.531651 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5cd08e97-a118-4f88-b699-2a0bb507b241-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-x97vp\" (UID: \"5cd08e97-a118-4f88-b699-2a0bb507b241\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-x97vp" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.555409 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5cd08e97-a118-4f88-b699-2a0bb507b241-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-x97vp\" (UID: \"5cd08e97-a118-4f88-b699-2a0bb507b241\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-x97vp" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.562194 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tk4zh\" (UniqueName: \"kubernetes.io/projected/e47c44fb-6570-4180-9ce2-311e50c7956c-kube-api-access-tk4zh\") pod \"kube-storage-version-migrator-operator-565b79b866-6zt5n\" (UID: \"e47c44fb-6570-4180-9ce2-311e50c7956c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-6zt5n" Jan 30 00:12:18 crc 
kubenswrapper[5104]: I0130 00:12:18.565489 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\"" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.567975 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-pzv69" event={"ID":"ff5aaf4d-9812-4773-bcd9-a6901952e242","Type":"ContainerStarted","Data":"a6930d457824529cf94e22e6fb6bf202c249abc09bd79c549a303d2360c3ca81"} Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.568029 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-g4wlb" event={"ID":"a92e1ebb-86ac-4456-873b-ce575e9cda12","Type":"ContainerStarted","Data":"2d7bd8eb2b37f01baf28b5ebf75bc74d0191dcf1595af0d95f4b2626e3ac6b27"} Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.568039 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-xkl2m" event={"ID":"5c4af38b-fd2a-49b5-be40-cbd25eba4bde","Type":"ContainerStarted","Data":"84dfcce3335c91e89df6644865bda2f1903bb9d1755b3340a726227f81cd497c"} Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.568048 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-r9m28" event={"ID":"d158deef-46a2-4f4b-bd06-fce37341fa01","Type":"ContainerStarted","Data":"d5546572827c68329f732ac9ba2ec43eef1b7ee6b318e90508b2c80c46557834"} Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.568057 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-d9wqk" event={"ID":"1bef0b46-9def-441e-88e8-f481e45026da","Type":"ContainerDied","Data":"338e72eeb0ce8824e99cf3a4f499b4422ce36ab98718339666e4257f685ae5fa"} Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.568082 5104 kubelet.go:2537] "SyncLoop ADD" source="api" 
pods=["openshift-service-ca/service-ca-74545575db-fgflv"] Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.582955 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\"" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.584249 5104 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-jcrzz"] Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.584367 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-fgflv" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.595720 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-x97vp" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.601942 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-m6rzk" event={"ID":"a1f8c00b-3459-4b15-ab8c-52407669c50a","Type":"ContainerStarted","Data":"65ad0a1ab31b9a95575c9d86939b50ee0156a77d143eb293cea5bb02d65b82a3"} Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.601977 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-g766x" event={"ID":"c47b4509-0bb1-4360-9db3-29ebfcd734e3","Type":"ContainerStarted","Data":"0ab9b2bb77fcaead421f25524b00b1e84579a0a28da49dbf7861e4ab78eb4ada"} Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.601996 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-v56dx" event={"ID":"512ba09a-c537-4c10-86c4-6226498ce0e0","Type":"ContainerStarted","Data":"0ace94a050c9859659a9d17b5e7db881435c9414e7c8d83079ac6af444c01895"} Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.602007 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-console/downloads-747b44746d-kzx6r" event={"ID":"6fd43d75-51fe-42d6-9f2a-adbe6045f25c","Type":"ContainerStarted","Data":"06954309d68b9c08b361fc58bda7ede888aabc13ab95b6e9ce9bc837a4984fcb"} Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.602022 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"26b50ed65bb557982949793e9975ac545e1b9c5d442978287995e9e40aa3d446"} Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.602033 5104 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-mb4lh"] Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.602124 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-jcrzz" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.603611 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\"" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.619401 5104 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-mb4lh" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.619257 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"99b3cb5b607f6a0042ce400c70ac13fdd106630d42a0ac0bba4f2be35c99ddb0"} Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.620280 5104 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-sdlg2"] Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.622449 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.624528 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:18 crc kubenswrapper[5104]: E0130 00:12:18.624694 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:19.124670403 +0000 UTC m=+119.857009622 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.624762 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzvwv\" (UniqueName: \"kubernetes.io/projected/aaa499cf-4449-4b32-9182-39c7d73cf064-kube-api-access-xzvwv\") pod \"package-server-manager-77f986bd66-6rcx2\" (UID: \"aaa499cf-4449-4b32-9182-39c7d73cf064\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-6rcx2" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.624820 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cd4db4af-c1ef-4771-88bf-d372af1849fa-webhook-cert\") pod \"packageserver-7d4fc7d867-grfh9\" (UID: \"cd4db4af-c1ef-4771-88bf-d372af1849fa\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-grfh9" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.624844 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsvwj\" (UniqueName: \"kubernetes.io/projected/3f7789da-fc14-4144-8d2e-44a08ce5dd85-kube-api-access-xsvwj\") pod \"control-plane-machine-set-operator-75ffdb6fcd-9lr7t\" (UID: \"3f7789da-fc14-4144-8d2e-44a08ce5dd85\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-9lr7t" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.624921 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"proxy-tls\" (UniqueName: \"kubernetes.io/secret/10904391-ad3c-46eb-8147-c32c0612487c-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-tgcbf\" (UID: \"10904391-ad3c-46eb-8147-c32c0612487c\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-tgcbf" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.624970 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/aaa499cf-4449-4b32-9182-39c7d73cf064-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-6rcx2\" (UID: \"aaa499cf-4449-4b32-9182-39c7d73cf064\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-6rcx2" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.625049 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/51278e19-fef3-4056-bdb6-f9f60f3a65e0-profile-collector-cert\") pod \"olm-operator-5cdf44d969-pdjtd\" (UID: \"51278e19-fef3-4056-bdb6-f9f60f3a65e0\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-pdjtd" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.625100 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ac96d3d5-fde2-4526-9d1d-ed33ebf8a909-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-dmtb5\" (UID: \"ac96d3d5-fde2-4526-9d1d-ed33ebf8a909\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-dmtb5" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.625147 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/51278e19-fef3-4056-bdb6-f9f60f3a65e0-srv-cert\") pod \"olm-operator-5cdf44d969-pdjtd\" (UID: \"51278e19-fef3-4056-bdb6-f9f60f3a65e0\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-pdjtd" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.625180 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9eebb380-6a1e-49b2-bd63-222bc499058b-config\") pod \"service-ca-operator-5b9c976747-bn2ph\" (UID: \"9eebb380-6a1e-49b2-bd63-222bc499058b\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-bn2ph" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.625222 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/10904391-ad3c-46eb-8147-c32c0612487c-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-tgcbf\" (UID: \"10904391-ad3c-46eb-8147-c32c0612487c\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-tgcbf" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.625246 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbl56\" (UniqueName: \"kubernetes.io/projected/9eebb380-6a1e-49b2-bd63-222bc499058b-kube-api-access-cbl56\") pod \"service-ca-operator-5b9c976747-bn2ph\" (UID: \"9eebb380-6a1e-49b2-bd63-222bc499058b\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-bn2ph" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.625274 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cd4db4af-c1ef-4771-88bf-d372af1849fa-apiservice-cert\") pod \"packageserver-7d4fc7d867-grfh9\" (UID: \"cd4db4af-c1ef-4771-88bf-d372af1849fa\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-grfh9" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.625298 5104 reconciler_common.go:224] "operationExecutor.MountVolume 
started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/51278e19-fef3-4056-bdb6-f9f60f3a65e0-tmpfs\") pod \"olm-operator-5cdf44d969-pdjtd\" (UID: \"51278e19-fef3-4056-bdb6-f9f60f3a65e0\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-pdjtd" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.625350 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.625373 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ac96d3d5-fde2-4526-9d1d-ed33ebf8a909-images\") pod \"machine-config-operator-67c9d58cbb-dmtb5\" (UID: \"ac96d3d5-fde2-4526-9d1d-ed33ebf8a909\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-dmtb5" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.625399 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpqdw\" (UniqueName: \"kubernetes.io/projected/cd4db4af-c1ef-4771-88bf-d372af1849fa-kube-api-access-wpqdw\") pod \"packageserver-7d4fc7d867-grfh9\" (UID: \"cd4db4af-c1ef-4771-88bf-d372af1849fa\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-grfh9" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.625419 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9eebb380-6a1e-49b2-bd63-222bc499058b-serving-cert\") pod \"service-ca-operator-5b9c976747-bn2ph\" (UID: \"9eebb380-6a1e-49b2-bd63-222bc499058b\") " 
pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-bn2ph" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.625459 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ac96d3d5-fde2-4526-9d1d-ed33ebf8a909-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-dmtb5\" (UID: \"ac96d3d5-fde2-4526-9d1d-ed33ebf8a909\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-dmtb5" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.625481 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/254976ad-3d5f-484e-b3ec-4dbc14567032-webhook-certs\") pod \"multus-admission-controller-69db94689b-gkzts\" (UID: \"254976ad-3d5f-484e-b3ec-4dbc14567032\") " pod="openshift-multus/multus-admission-controller-69db94689b-gkzts" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.625519 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqcv2\" (UniqueName: \"kubernetes.io/projected/10904391-ad3c-46eb-8147-c32c0612487c-kube-api-access-qqcv2\") pod \"machine-config-controller-f9cdd68f7-tgcbf\" (UID: \"10904391-ad3c-46eb-8147-c32c0612487c\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-tgcbf" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.625540 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/3f7789da-fc14-4144-8d2e-44a08ce5dd85-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-9lr7t\" (UID: \"3f7789da-fc14-4144-8d2e-44a08ce5dd85\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-9lr7t" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.625571 5104 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"kube-api-access-9p7ns\" (UniqueName: \"kubernetes.io/projected/ac96d3d5-fde2-4526-9d1d-ed33ebf8a909-kube-api-access-9p7ns\") pod \"machine-config-operator-67c9d58cbb-dmtb5\" (UID: \"ac96d3d5-fde2-4526-9d1d-ed33ebf8a909\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-dmtb5" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.625598 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-79gjk\" (UniqueName: \"kubernetes.io/projected/254976ad-3d5f-484e-b3ec-4dbc14567032-kube-api-access-79gjk\") pod \"multus-admission-controller-69db94689b-gkzts\" (UID: \"254976ad-3d5f-484e-b3ec-4dbc14567032\") " pod="openshift-multus/multus-admission-controller-69db94689b-gkzts" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.625617 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/cd4db4af-c1ef-4771-88bf-d372af1849fa-tmpfs\") pod \"packageserver-7d4fc7d867-grfh9\" (UID: \"cd4db4af-c1ef-4771-88bf-d372af1849fa\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-grfh9" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.625642 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cvptv\" (UniqueName: \"kubernetes.io/projected/51278e19-fef3-4056-bdb6-f9f60f3a65e0-kube-api-access-cvptv\") pod \"olm-operator-5cdf44d969-pdjtd\" (UID: \"51278e19-fef3-4056-bdb6-f9f60f3a65e0\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-pdjtd" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.626468 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ac96d3d5-fde2-4526-9d1d-ed33ebf8a909-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-dmtb5\" (UID: 
\"ac96d3d5-fde2-4526-9d1d-ed33ebf8a909\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-dmtb5" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.626525 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/51278e19-fef3-4056-bdb6-f9f60f3a65e0-tmpfs\") pod \"olm-operator-5cdf44d969-pdjtd\" (UID: \"51278e19-fef3-4056-bdb6-f9f60f3a65e0\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-pdjtd" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.626686 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ac96d3d5-fde2-4526-9d1d-ed33ebf8a909-images\") pod \"machine-config-operator-67c9d58cbb-dmtb5\" (UID: \"ac96d3d5-fde2-4526-9d1d-ed33ebf8a909\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-dmtb5" Jan 30 00:12:18 crc kubenswrapper[5104]: E0130 00:12:18.626741 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:19.126732178 +0000 UTC m=+119.859071397 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-lhbqs" (UID: "40d2656d-a61b-4aaa-8860-225ca88ac6a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.632412 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/51278e19-fef3-4056-bdb6-f9f60f3a65e0-profile-collector-cert\") pod \"olm-operator-5cdf44d969-pdjtd\" (UID: \"51278e19-fef3-4056-bdb6-f9f60f3a65e0\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-pdjtd" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.632547 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/51278e19-fef3-4056-bdb6-f9f60f3a65e0-srv-cert\") pod \"olm-operator-5cdf44d969-pdjtd\" (UID: \"51278e19-fef3-4056-bdb6-f9f60f3a65e0\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-pdjtd" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.633513 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ac96d3d5-fde2-4526-9d1d-ed33ebf8a909-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-dmtb5\" (UID: \"ac96d3d5-fde2-4526-9d1d-ed33ebf8a909\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-dmtb5" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.634301 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/254976ad-3d5f-484e-b3ec-4dbc14567032-webhook-certs\") pod \"multus-admission-controller-69db94689b-gkzts\" (UID: 
\"254976ad-3d5f-484e-b3ec-4dbc14567032\") " pod="openshift-multus/multus-admission-controller-69db94689b-gkzts" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.639442 5104 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495520-7xhcr"] Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.639636 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-sdlg2" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.643960 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\"" Jan 30 00:12:18 crc kubenswrapper[5104]: W0130 00:12:18.658899 5104 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod994594a7_ccc0_4f06_84ca_89f4e3561a2f.slice/crio-f3ca854f8a6354d85a301d8460774c632c3d2991ffca2961b6b037bf8316a5d6 WatchSource:0}: Error finding container f3ca854f8a6354d85a301d8460774c632c3d2991ffca2961b6b037bf8316a5d6: Status 404 returned error can't find the container with id f3ca854f8a6354d85a301d8460774c632c3d2991ffca2961b6b037bf8316a5d6 Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.662597 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\"" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.666031 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-d6sw5" event={"ID":"211c8215-e1c1-4bb9-881e-d2570dead87e","Type":"ContainerStarted","Data":"e11f8f8c0905b3bc5180167e1e63ac7806bd085beb207af216efa48f5c479b59"} Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.666070 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-7pgw4" event={"ID":"bad64408-8e74-460b-b652-f12f5920cd21","Type":"ContainerStarted","Data":"362fc795e929f720465474193cfab9403ca2b891381a7d7211411655f95fc86b"} Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.666085 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-l7gdh"] Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.666099 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-qtds2" event={"ID":"de15ba83-bde1-43f2-b924-65926e8a4565","Type":"ContainerStarted","Data":"7fff781e8b1f3d4321ba43c66cf7c3812f1f383e77a831249a995f38929a4fa5"} Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.666112 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-xkl2m"] Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.666125 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-pruner-29495520-4kxpc"] Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.666135 5104 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-54c688565-wmdwr"] Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.666222 5104 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495520-7xhcr" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.703879 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-v56dx"] Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.703929 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-m6rzk"] Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.703962 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-wmdwr" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.703956 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-xs5zv" event={"ID":"5b96d7cb-4106-4adb-baab-92ec201306e2","Type":"ContainerStarted","Data":"8178c04bb5a3cd5548b4d0e5ef12dbabccc70479bbc5bd089633f39bbf0dd624"} Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.703387 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\"" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.704067 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-mh68h"] Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.704448 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-jn5m9" event={"ID":"d5e98104-79f7-4fb8-b554-f705833000a1","Type":"ContainerStarted","Data":"a3870c70201257285bebe6cb1743ba0d37d1fe229064bf433cf7defed65bc525"} Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.704480 5104 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-t2sbz"] Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.725210 5104 
kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-vhqhg"] Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.725616 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-t2sbz" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.725881 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\"" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.726442 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.726590 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/6fa87bbe-4a1f-4c3e-a2cb-5c6f6d9440a6-signing-cabundle\") pod \"service-ca-74545575db-fgflv\" (UID: \"6fa87bbe-4a1f-4c3e-a2cb-5c6f6d9440a6\") " pod="openshift-service-ca/service-ca-74545575db-fgflv" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.726618 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2c01d6b4-a210-4e12-bb14-2694d7e41659-serving-cert\") pod \"authentication-operator-7f5c659b84-sdlg2\" (UID: \"2c01d6b4-a210-4e12-bb14-2694d7e41659\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-sdlg2" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.726643 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/cd4db4af-c1ef-4771-88bf-d372af1849fa-tmpfs\") pod 
\"packageserver-7d4fc7d867-grfh9\" (UID: \"cd4db4af-c1ef-4771-88bf-d372af1849fa\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-grfh9" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.726693 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d559e43d-60f9-4f29-8d4e-c595cad2bd22-srv-cert\") pod \"catalog-operator-75ff9f647d-jcrzz\" (UID: \"d559e43d-60f9-4f29-8d4e-c595cad2bd22\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-jcrzz" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.726728 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c01d6b4-a210-4e12-bb14-2694d7e41659-config\") pod \"authentication-operator-7f5c659b84-sdlg2\" (UID: \"2c01d6b4-a210-4e12-bb14-2694d7e41659\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-sdlg2" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.726751 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/6fa87bbe-4a1f-4c3e-a2cb-5c6f6d9440a6-signing-key\") pod \"service-ca-74545575db-fgflv\" (UID: \"6fa87bbe-4a1f-4c3e-a2cb-5c6f6d9440a6\") " pod="openshift-service-ca/service-ca-74545575db-fgflv" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.726773 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2c01d6b4-a210-4e12-bb14-2694d7e41659-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-sdlg2\" (UID: \"2c01d6b4-a210-4e12-bb14-2694d7e41659\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-sdlg2" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.726798 5104 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"kube-api-access-xzvwv\" (UniqueName: \"kubernetes.io/projected/aaa499cf-4449-4b32-9182-39c7d73cf064-kube-api-access-xzvwv\") pod \"package-server-manager-77f986bd66-6rcx2\" (UID: \"aaa499cf-4449-4b32-9182-39c7d73cf064\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-6rcx2" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.726821 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pg5mk\" (UniqueName: \"kubernetes.io/projected/d559e43d-60f9-4f29-8d4e-c595cad2bd22-kube-api-access-pg5mk\") pod \"catalog-operator-75ff9f647d-jcrzz\" (UID: \"d559e43d-60f9-4f29-8d4e-c595cad2bd22\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-jcrzz" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.726868 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cd4db4af-c1ef-4771-88bf-d372af1849fa-webhook-cert\") pod \"packageserver-7d4fc7d867-grfh9\" (UID: \"cd4db4af-c1ef-4771-88bf-d372af1849fa\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-grfh9" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.726885 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xsvwj\" (UniqueName: \"kubernetes.io/projected/3f7789da-fc14-4144-8d2e-44a08ce5dd85-kube-api-access-xsvwj\") pod \"control-plane-machine-set-operator-75ffdb6fcd-9lr7t\" (UID: \"3f7789da-fc14-4144-8d2e-44a08ce5dd85\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-9lr7t" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.726902 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/10904391-ad3c-46eb-8147-c32c0612487c-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-tgcbf\" (UID: 
\"10904391-ad3c-46eb-8147-c32c0612487c\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-tgcbf" Jan 30 00:12:18 crc kubenswrapper[5104]: E0130 00:12:18.727375 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:19.227357415 +0000 UTC m=+119.959696634 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.727618 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/aaa499cf-4449-4b32-9182-39c7d73cf064-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-6rcx2\" (UID: \"aaa499cf-4449-4b32-9182-39c7d73cf064\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-6rcx2" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.727661 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b5f128e0-a6da-409d-9937-dc7f8b000da0-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-mb4lh\" (UID: \"b5f128e0-a6da-409d-9937-dc7f8b000da0\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-mb4lh" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.727825 5104 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/cd4db4af-c1ef-4771-88bf-d372af1849fa-tmpfs\") pod \"packageserver-7d4fc7d867-grfh9\" (UID: \"cd4db4af-c1ef-4771-88bf-d372af1849fa\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-grfh9" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.727721 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9eebb380-6a1e-49b2-bd63-222bc499058b-config\") pod \"service-ca-operator-5b9c976747-bn2ph\" (UID: \"9eebb380-6a1e-49b2-bd63-222bc499058b\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-bn2ph" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.727928 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/10904391-ad3c-46eb-8147-c32c0612487c-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-tgcbf\" (UID: \"10904391-ad3c-46eb-8147-c32c0612487c\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-tgcbf" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.727946 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cbl56\" (UniqueName: \"kubernetes.io/projected/9eebb380-6a1e-49b2-bd63-222bc499058b-kube-api-access-cbl56\") pod \"service-ca-operator-5b9c976747-bn2ph\" (UID: \"9eebb380-6a1e-49b2-bd63-222bc499058b\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-bn2ph" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.727967 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cd4db4af-c1ef-4771-88bf-d372af1849fa-apiservice-cert\") pod \"packageserver-7d4fc7d867-grfh9\" (UID: \"cd4db4af-c1ef-4771-88bf-d372af1849fa\") " 
pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-grfh9" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.728000 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d559e43d-60f9-4f29-8d4e-c595cad2bd22-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-jcrzz\" (UID: \"d559e43d-60f9-4f29-8d4e-c595cad2bd22\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-jcrzz" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.728017 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2c01d6b4-a210-4e12-bb14-2694d7e41659-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-sdlg2\" (UID: \"2c01d6b4-a210-4e12-bb14-2694d7e41659\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-sdlg2" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.728043 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.728072 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wpqdw\" (UniqueName: \"kubernetes.io/projected/cd4db4af-c1ef-4771-88bf-d372af1849fa-kube-api-access-wpqdw\") pod \"packageserver-7d4fc7d867-grfh9\" (UID: \"cd4db4af-c1ef-4771-88bf-d372af1849fa\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-grfh9" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.728156 5104 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9eebb380-6a1e-49b2-bd63-222bc499058b-serving-cert\") pod \"service-ca-operator-5b9c976747-bn2ph\" (UID: \"9eebb380-6a1e-49b2-bd63-222bc499058b\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-bn2ph" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.728188 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfncx\" (UniqueName: \"kubernetes.io/projected/2c01d6b4-a210-4e12-bb14-2694d7e41659-kube-api-access-xfncx\") pod \"authentication-operator-7f5c659b84-sdlg2\" (UID: \"2c01d6b4-a210-4e12-bb14-2694d7e41659\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-sdlg2" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.728208 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b5f128e0-a6da-409d-9937-dc7f8b000da0-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-mb4lh\" (UID: \"b5f128e0-a6da-409d-9937-dc7f8b000da0\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-mb4lh" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.728458 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/d559e43d-60f9-4f29-8d4e-c595cad2bd22-tmpfs\") pod \"catalog-operator-75ff9f647d-jcrzz\" (UID: \"d559e43d-60f9-4f29-8d4e-c595cad2bd22\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-jcrzz" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.728533 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdl72\" (UniqueName: \"kubernetes.io/projected/6fa87bbe-4a1f-4c3e-a2cb-5c6f6d9440a6-kube-api-access-hdl72\") pod 
\"service-ca-74545575db-fgflv\" (UID: \"6fa87bbe-4a1f-4c3e-a2cb-5c6f6d9440a6\") " pod="openshift-service-ca/service-ca-74545575db-fgflv" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.728563 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qqcv2\" (UniqueName: \"kubernetes.io/projected/10904391-ad3c-46eb-8147-c32c0612487c-kube-api-access-qqcv2\") pod \"machine-config-controller-f9cdd68f7-tgcbf\" (UID: \"10904391-ad3c-46eb-8147-c32c0612487c\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-tgcbf" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.728583 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/3f7789da-fc14-4144-8d2e-44a08ce5dd85-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-9lr7t\" (UID: \"3f7789da-fc14-4144-8d2e-44a08ce5dd85\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-9lr7t" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.728600 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b5f128e0-a6da-409d-9937-dc7f8b000da0-tmp\") pod \"marketplace-operator-547dbd544d-mb4lh\" (UID: \"b5f128e0-a6da-409d-9937-dc7f8b000da0\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-mb4lh" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.728620 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cx79z\" (UniqueName: \"kubernetes.io/projected/b5f128e0-a6da-409d-9937-dc7f8b000da0-kube-api-access-cx79z\") pod \"marketplace-operator-547dbd544d-mb4lh\" (UID: \"b5f128e0-a6da-409d-9937-dc7f8b000da0\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-mb4lh" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 
00:12:18.728628 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/10904391-ad3c-46eb-8147-c32c0612487c-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-tgcbf\" (UID: \"10904391-ad3c-46eb-8147-c32c0612487c\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-tgcbf" Jan 30 00:12:18 crc kubenswrapper[5104]: E0130 00:12:18.728864 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:19.228838235 +0000 UTC m=+119.961177454 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-lhbqs" (UID: "40d2656d-a61b-4aaa-8860-225ca88ac6a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:18 crc kubenswrapper[5104]: W0130 00:12:18.731868 5104 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda15215a1_cfcc_4601_9e8c_1726c1837773.slice/crio-3ea55122a2884fb5aca9d9872104f9af0c29d9da7d3e48472aafd96045ee7c01 WatchSource:0}: Error finding container 3ea55122a2884fb5aca9d9872104f9af0c29d9da7d3e48472aafd96045ee7c01: Status 404 returned error can't find the container with id 3ea55122a2884fb5aca9d9872104f9af0c29d9da7d3e48472aafd96045ee7c01 Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.732513 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/cd4db4af-c1ef-4771-88bf-d372af1849fa-apiservice-cert\") pod \"packageserver-7d4fc7d867-grfh9\" (UID: \"cd4db4af-c1ef-4771-88bf-d372af1849fa\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-grfh9" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.733477 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/10904391-ad3c-46eb-8147-c32c0612487c-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-tgcbf\" (UID: \"10904391-ad3c-46eb-8147-c32c0612487c\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-tgcbf" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.734278 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cd4db4af-c1ef-4771-88bf-d372af1849fa-webhook-cert\") pod \"packageserver-7d4fc7d867-grfh9\" (UID: \"cd4db4af-c1ef-4771-88bf-d372af1849fa\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-grfh9" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.737087 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/aaa499cf-4449-4b32-9182-39c7d73cf064-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-6rcx2\" (UID: \"aaa499cf-4449-4b32-9182-39c7d73cf064\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-6rcx2" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.737204 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9eebb380-6a1e-49b2-bd63-222bc499058b-config\") pod \"service-ca-operator-5b9c976747-bn2ph\" (UID: \"9eebb380-6a1e-49b2-bd63-222bc499058b\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-bn2ph" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.739919 
5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9eebb380-6a1e-49b2-bd63-222bc499058b-serving-cert\") pod \"service-ca-operator-5b9c976747-bn2ph\" (UID: \"9eebb380-6a1e-49b2-bd63-222bc499058b\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-bn2ph" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.742808 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\"" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.743960 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/3f7789da-fc14-4144-8d2e-44a08ce5dd85-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-9lr7t\" (UID: \"3f7789da-fc14-4144-8d2e-44a08ce5dd85\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-9lr7t" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.752373 5104 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-6zt5n" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.754198 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-pdjtd"] Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.754232 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-r9m28"] Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.754243 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-c5tsr"] Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.754253 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-jn5m9"] Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.754262 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-d6sw5"] Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.754270 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-lhbqs"] Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.754279 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-g766x"] Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.754287 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-jd2mk"] Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.754295 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-dmtb5"] Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.754303 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" 
pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-qtds2"] Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.754312 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-675xg"] Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.754320 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-896z6"] Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.754328 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-x97vp"] Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.754325 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-vhqhg" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.754338 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-kzx6r"] Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.754569 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-g4wlb"] Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.754582 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-7pgw4"] Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.754591 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-6rcx2"] Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.754600 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-6zt5n"] Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.754611 5104 kubelet.go:2537] "SyncLoop ADD" source="api" 
pods=["openshift-multus/cni-sysctl-allowlist-ds-2f4tq"] Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.763906 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\"" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.781141 5104 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-72hww"] Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.781329 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-2f4tq" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.794767 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\"" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.804232 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\"" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.822784 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\"" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.829470 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.829714 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ll4bp\" (UniqueName: \"kubernetes.io/projected/73af8639-1dae-4861-8165-94c6c5410e1b-kube-api-access-ll4bp\") pod \"machine-config-server-t2sbz\" (UID: \"73af8639-1dae-4861-8165-94c6c5410e1b\") " 
pod="openshift-machine-config-operator/machine-config-server-t2sbz" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.829799 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/73af8639-1dae-4861-8165-94c6c5410e1b-certs\") pod \"machine-config-server-t2sbz\" (UID: \"73af8639-1dae-4861-8165-94c6c5410e1b\") " pod="openshift-machine-config-operator/machine-config-server-t2sbz" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.829887 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmp6h\" (UniqueName: \"kubernetes.io/projected/43bd3b33-35f9-480e-9425-26cc2318094f-kube-api-access-gmp6h\") pod \"collect-profiles-29495520-7xhcr\" (UID: \"43bd3b33-35f9-480e-9425-26cc2318094f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495520-7xhcr" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.829987 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d559e43d-60f9-4f29-8d4e-c595cad2bd22-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-jcrzz\" (UID: \"d559e43d-60f9-4f29-8d4e-c595cad2bd22\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-jcrzz" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.830057 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2c01d6b4-a210-4e12-bb14-2694d7e41659-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-sdlg2\" (UID: \"2c01d6b4-a210-4e12-bb14-2694d7e41659\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-sdlg2" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.830134 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xfncx\" (UniqueName: 
\"kubernetes.io/projected/2c01d6b4-a210-4e12-bb14-2694d7e41659-kube-api-access-xfncx\") pod \"authentication-operator-7f5c659b84-sdlg2\" (UID: \"2c01d6b4-a210-4e12-bb14-2694d7e41659\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-sdlg2" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.830198 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b5f128e0-a6da-409d-9937-dc7f8b000da0-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-mb4lh\" (UID: \"b5f128e0-a6da-409d-9937-dc7f8b000da0\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-mb4lh" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.830260 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/43bd3b33-35f9-480e-9425-26cc2318094f-config-volume\") pod \"collect-profiles-29495520-7xhcr\" (UID: \"43bd3b33-35f9-480e-9425-26cc2318094f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495520-7xhcr" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.830332 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vl7xk\" (UniqueName: \"kubernetes.io/projected/c1186351-b63f-4a39-b8e6-e01f0b686544-kube-api-access-vl7xk\") pod \"machine-approver-54c688565-wmdwr\" (UID: \"c1186351-b63f-4a39-b8e6-e01f0b686544\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-wmdwr" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.830433 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/d559e43d-60f9-4f29-8d4e-c595cad2bd22-tmpfs\") pod \"catalog-operator-75ff9f647d-jcrzz\" (UID: \"d559e43d-60f9-4f29-8d4e-c595cad2bd22\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-jcrzz" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.830498 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hdl72\" (UniqueName: \"kubernetes.io/projected/6fa87bbe-4a1f-4c3e-a2cb-5c6f6d9440a6-kube-api-access-hdl72\") pod \"service-ca-74545575db-fgflv\" (UID: \"6fa87bbe-4a1f-4c3e-a2cb-5c6f6d9440a6\") " pod="openshift-service-ca/service-ca-74545575db-fgflv" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.830583 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b5f128e0-a6da-409d-9937-dc7f8b000da0-tmp\") pod \"marketplace-operator-547dbd544d-mb4lh\" (UID: \"b5f128e0-a6da-409d-9937-dc7f8b000da0\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-mb4lh" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.830654 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cx79z\" (UniqueName: \"kubernetes.io/projected/b5f128e0-a6da-409d-9937-dc7f8b000da0-kube-api-access-cx79z\") pod \"marketplace-operator-547dbd544d-mb4lh\" (UID: \"b5f128e0-a6da-409d-9937-dc7f8b000da0\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-mb4lh" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.830718 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/73af8639-1dae-4861-8165-94c6c5410e1b-node-bootstrap-token\") pod \"machine-config-server-t2sbz\" (UID: \"73af8639-1dae-4861-8165-94c6c5410e1b\") " pod="openshift-machine-config-operator/machine-config-server-t2sbz" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.830794 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: 
\"kubernetes.io/configmap/6fa87bbe-4a1f-4c3e-a2cb-5c6f6d9440a6-signing-cabundle\") pod \"service-ca-74545575db-fgflv\" (UID: \"6fa87bbe-4a1f-4c3e-a2cb-5c6f6d9440a6\") " pod="openshift-service-ca/service-ca-74545575db-fgflv" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.830872 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2c01d6b4-a210-4e12-bb14-2694d7e41659-serving-cert\") pod \"authentication-operator-7f5c659b84-sdlg2\" (UID: \"2c01d6b4-a210-4e12-bb14-2694d7e41659\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-sdlg2" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.830966 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c1186351-b63f-4a39-b8e6-e01f0b686544-auth-proxy-config\") pod \"machine-approver-54c688565-wmdwr\" (UID: \"c1186351-b63f-4a39-b8e6-e01f0b686544\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-wmdwr" Jan 30 00:12:18 crc kubenswrapper[5104]: E0130 00:12:18.831029 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:19.331008833 +0000 UTC m=+120.063348042 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.831693 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/d559e43d-60f9-4f29-8d4e-c595cad2bd22-tmpfs\") pod \"catalog-operator-75ff9f647d-jcrzz\" (UID: \"d559e43d-60f9-4f29-8d4e-c595cad2bd22\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-jcrzz" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.832434 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b5f128e0-a6da-409d-9937-dc7f8b000da0-tmp\") pod \"marketplace-operator-547dbd544d-mb4lh\" (UID: \"b5f128e0-a6da-409d-9937-dc7f8b000da0\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-mb4lh" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.832598 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d559e43d-60f9-4f29-8d4e-c595cad2bd22-srv-cert\") pod \"catalog-operator-75ff9f647d-jcrzz\" (UID: \"d559e43d-60f9-4f29-8d4e-c595cad2bd22\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-jcrzz" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.832629 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c01d6b4-a210-4e12-bb14-2694d7e41659-config\") pod \"authentication-operator-7f5c659b84-sdlg2\" (UID: \"2c01d6b4-a210-4e12-bb14-2694d7e41659\") " 
pod="openshift-authentication-operator/authentication-operator-7f5c659b84-sdlg2" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.832682 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/c1186351-b63f-4a39-b8e6-e01f0b686544-machine-approver-tls\") pod \"machine-approver-54c688565-wmdwr\" (UID: \"c1186351-b63f-4a39-b8e6-e01f0b686544\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-wmdwr" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.832711 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/6fa87bbe-4a1f-4c3e-a2cb-5c6f6d9440a6-signing-key\") pod \"service-ca-74545575db-fgflv\" (UID: \"6fa87bbe-4a1f-4c3e-a2cb-5c6f6d9440a6\") " pod="openshift-service-ca/service-ca-74545575db-fgflv" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.832736 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2c01d6b4-a210-4e12-bb14-2694d7e41659-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-sdlg2\" (UID: \"2c01d6b4-a210-4e12-bb14-2694d7e41659\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-sdlg2" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.832770 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pg5mk\" (UniqueName: \"kubernetes.io/projected/d559e43d-60f9-4f29-8d4e-c595cad2bd22-kube-api-access-pg5mk\") pod \"catalog-operator-75ff9f647d-jcrzz\" (UID: \"d559e43d-60f9-4f29-8d4e-c595cad2bd22\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-jcrzz" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.832802 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/43bd3b33-35f9-480e-9425-26cc2318094f-secret-volume\") pod \"collect-profiles-29495520-7xhcr\" (UID: \"43bd3b33-35f9-480e-9425-26cc2318094f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495520-7xhcr" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.832607 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/6fa87bbe-4a1f-4c3e-a2cb-5c6f6d9440a6-signing-cabundle\") pod \"service-ca-74545575db-fgflv\" (UID: \"6fa87bbe-4a1f-4c3e-a2cb-5c6f6d9440a6\") " pod="openshift-service-ca/service-ca-74545575db-fgflv" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.833732 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b5f128e0-a6da-409d-9937-dc7f8b000da0-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-mb4lh\" (UID: \"b5f128e0-a6da-409d-9937-dc7f8b000da0\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-mb4lh" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.833818 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1186351-b63f-4a39-b8e6-e01f0b686544-config\") pod \"machine-approver-54c688565-wmdwr\" (UID: \"c1186351-b63f-4a39-b8e6-e01f0b686544\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-wmdwr" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.836594 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-fgflv"] Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.836630 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-pzv69"] Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.836643 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" 
pods=["openshift-apiserver/apiserver-9ddfb9f55-d9wqk"] Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.836652 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-vklm2"] Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.836663 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-9lr7t"] Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.836671 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-mb4lh"] Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.836684 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-sdlg2"] Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.836693 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495520-7xhcr"] Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.836701 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-tgcbf"] Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.836709 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-72hww"] Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.836716 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-gkzts"] Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.836726 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-bn2ph"] Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.836735 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-jcrzz"] Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 
00:12:18.836744 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-grfh9"] Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.836752 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-vhqhg"] Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.836765 5104 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-862pd"] Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.837221 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-72hww" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.838122 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/6fa87bbe-4a1f-4c3e-a2cb-5c6f6d9440a6-signing-key\") pod \"service-ca-74545575db-fgflv\" (UID: \"6fa87bbe-4a1f-4c3e-a2cb-5c6f6d9440a6\") " pod="openshift-service-ca/service-ca-74545575db-fgflv" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.841713 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d559e43d-60f9-4f29-8d4e-c595cad2bd22-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-jcrzz\" (UID: \"d559e43d-60f9-4f29-8d4e-c595cad2bd22\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-jcrzz" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.852084 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d559e43d-60f9-4f29-8d4e-c595cad2bd22-srv-cert\") pod \"catalog-operator-75ff9f647d-jcrzz\" (UID: \"d559e43d-60f9-4f29-8d4e-c595cad2bd22\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-jcrzz" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.863607 5104 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\"" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.865974 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\"" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.874743 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b5f128e0-a6da-409d-9937-dc7f8b000da0-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-mb4lh\" (UID: \"b5f128e0-a6da-409d-9937-dc7f8b000da0\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-mb4lh" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.877218 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b5f128e0-a6da-409d-9937-dc7f8b000da0-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-mb4lh\" (UID: \"b5f128e0-a6da-409d-9937-dc7f8b000da0\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-mb4lh" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.889041 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\"" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.902609 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\"" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.937739 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/c1186351-b63f-4a39-b8e6-e01f0b686544-machine-approver-tls\") pod \"machine-approver-54c688565-wmdwr\" (UID: \"c1186351-b63f-4a39-b8e6-e01f0b686544\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-wmdwr" Jan 30 00:12:18 crc 
kubenswrapper[5104]: I0130 00:12:18.937781 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/70d57bd5-86d2-4a87-baac-b6c03e6b5cb2-config-volume\") pod \"dns-default-72hww\" (UID: \"70d57bd5-86d2-4a87-baac-b6c03e6b5cb2\") " pod="openshift-dns/dns-default-72hww" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.937806 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/69495734-d360-40f9-bf1c-98a808e7f987-cert\") pod \"ingress-canary-vhqhg\" (UID: \"69495734-d360-40f9-bf1c-98a808e7f987\") " pod="openshift-ingress-canary/ingress-canary-vhqhg" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.937828 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/3b2a92e1-d95a-4a3e-a07e-62e5100931bb-ready\") pod \"cni-sysctl-allowlist-ds-2f4tq\" (UID: \"3b2a92e1-d95a-4a3e-a07e-62e5100931bb\") " pod="openshift-multus/cni-sysctl-allowlist-ds-2f4tq" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.937884 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/70d57bd5-86d2-4a87-baac-b6c03e6b5cb2-metrics-tls\") pod \"dns-default-72hww\" (UID: \"70d57bd5-86d2-4a87-baac-b6c03e6b5cb2\") " pod="openshift-dns/dns-default-72hww" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.937907 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/43bd3b33-35f9-480e-9425-26cc2318094f-secret-volume\") pod \"collect-profiles-29495520-7xhcr\" (UID: \"43bd3b33-35f9-480e-9425-26cc2318094f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495520-7xhcr" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 
00:12:18.937967 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1186351-b63f-4a39-b8e6-e01f0b686544-config\") pod \"machine-approver-54c688565-wmdwr\" (UID: \"c1186351-b63f-4a39-b8e6-e01f0b686544\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-wmdwr" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.937997 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtqrw\" (UniqueName: \"kubernetes.io/projected/70d57bd5-86d2-4a87-baac-b6c03e6b5cb2-kube-api-access-jtqrw\") pod \"dns-default-72hww\" (UID: \"70d57bd5-86d2-4a87-baac-b6c03e6b5cb2\") " pod="openshift-dns/dns-default-72hww" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.938027 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mdkn\" (UniqueName: \"kubernetes.io/projected/3b2a92e1-d95a-4a3e-a07e-62e5100931bb-kube-api-access-5mdkn\") pod \"cni-sysctl-allowlist-ds-2f4tq\" (UID: \"3b2a92e1-d95a-4a3e-a07e-62e5100931bb\") " pod="openshift-multus/cni-sysctl-allowlist-ds-2f4tq" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.938060 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ll4bp\" (UniqueName: \"kubernetes.io/projected/73af8639-1dae-4861-8165-94c6c5410e1b-kube-api-access-ll4bp\") pod \"machine-config-server-t2sbz\" (UID: \"73af8639-1dae-4861-8165-94c6c5410e1b\") " pod="openshift-machine-config-operator/machine-config-server-t2sbz" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.938080 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/73af8639-1dae-4861-8165-94c6c5410e1b-certs\") pod \"machine-config-server-t2sbz\" (UID: \"73af8639-1dae-4861-8165-94c6c5410e1b\") " 
pod="openshift-machine-config-operator/machine-config-server-t2sbz" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.938131 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gmp6h\" (UniqueName: \"kubernetes.io/projected/43bd3b33-35f9-480e-9425-26cc2318094f-kube-api-access-gmp6h\") pod \"collect-profiles-29495520-7xhcr\" (UID: \"43bd3b33-35f9-480e-9425-26cc2318094f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495520-7xhcr" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.938159 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3b2a92e1-d95a-4a3e-a07e-62e5100931bb-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-2f4tq\" (UID: \"3b2a92e1-d95a-4a3e-a07e-62e5100931bb\") " pod="openshift-multus/cni-sysctl-allowlist-ds-2f4tq" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.938216 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.938257 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/70d57bd5-86d2-4a87-baac-b6c03e6b5cb2-tmp-dir\") pod \"dns-default-72hww\" (UID: \"70d57bd5-86d2-4a87-baac-b6c03e6b5cb2\") " pod="openshift-dns/dns-default-72hww" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.938613 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gn2mh\" (UniqueName: 
\"kubernetes.io/projected/69495734-d360-40f9-bf1c-98a808e7f987-kube-api-access-gn2mh\") pod \"ingress-canary-vhqhg\" (UID: \"69495734-d360-40f9-bf1c-98a808e7f987\") " pod="openshift-ingress-canary/ingress-canary-vhqhg" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.938665 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/43bd3b33-35f9-480e-9425-26cc2318094f-config-volume\") pod \"collect-profiles-29495520-7xhcr\" (UID: \"43bd3b33-35f9-480e-9425-26cc2318094f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495520-7xhcr" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.938763 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vl7xk\" (UniqueName: \"kubernetes.io/projected/c1186351-b63f-4a39-b8e6-e01f0b686544-kube-api-access-vl7xk\") pod \"machine-approver-54c688565-wmdwr\" (UID: \"c1186351-b63f-4a39-b8e6-e01f0b686544\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-wmdwr" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.938812 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/73af8639-1dae-4861-8165-94c6c5410e1b-node-bootstrap-token\") pod \"machine-config-server-t2sbz\" (UID: \"73af8639-1dae-4861-8165-94c6c5410e1b\") " pod="openshift-machine-config-operator/machine-config-server-t2sbz" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.938842 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/3b2a92e1-d95a-4a3e-a07e-62e5100931bb-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-2f4tq\" (UID: \"3b2a92e1-d95a-4a3e-a07e-62e5100931bb\") " pod="openshift-multus/cni-sysctl-allowlist-ds-2f4tq" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.938983 5104 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c1186351-b63f-4a39-b8e6-e01f0b686544-auth-proxy-config\") pod \"machine-approver-54c688565-wmdwr\" (UID: \"c1186351-b63f-4a39-b8e6-e01f0b686544\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-wmdwr" Jan 30 00:12:18 crc kubenswrapper[5104]: E0130 00:12:18.940709 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:19.440693794 +0000 UTC m=+120.173033013 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-lhbqs" (UID: "40d2656d-a61b-4aaa-8860-225ca88ac6a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.945390 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/43bd3b33-35f9-480e-9425-26cc2318094f-secret-volume\") pod \"collect-profiles-29495520-7xhcr\" (UID: \"43bd3b33-35f9-480e-9425-26cc2318094f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495520-7xhcr" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.958461 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-79gjk\" (UniqueName: \"kubernetes.io/projected/254976ad-3d5f-484e-b3ec-4dbc14567032-kube-api-access-79gjk\") pod \"multus-admission-controller-69db94689b-gkzts\" (UID: \"254976ad-3d5f-484e-b3ec-4dbc14567032\") " 
pod="openshift-multus/multus-admission-controller-69db94689b-gkzts" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.982082 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvptv\" (UniqueName: \"kubernetes.io/projected/51278e19-fef3-4056-bdb6-f9f60f3a65e0-kube-api-access-cvptv\") pod \"olm-operator-5cdf44d969-pdjtd\" (UID: \"51278e19-fef3-4056-bdb6-f9f60f3a65e0\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-pdjtd" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.987274 5104 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-xs5zv container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.987317 5104 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-xs5zv" podUID="5b96d7cb-4106-4adb-baab-92ec201306e2" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.988502 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:12:18 crc kubenswrapper[5104]: I0130 00:12:18.997061 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9p7ns\" (UniqueName: \"kubernetes.io/projected/ac96d3d5-fde2-4526-9d1d-ed33ebf8a909-kube-api-access-9p7ns\") pod \"machine-config-operator-67c9d58cbb-dmtb5\" (UID: \"ac96d3d5-fde2-4526-9d1d-ed33ebf8a909\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-dmtb5" Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.006551 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\"" Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.022819 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\"" Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.024135 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c01d6b4-a210-4e12-bb14-2694d7e41659-config\") pod \"authentication-operator-7f5c659b84-sdlg2\" (UID: \"2c01d6b4-a210-4e12-bb14-2694d7e41659\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-sdlg2" Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.041926 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.042231 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5mdkn\" (UniqueName: \"kubernetes.io/projected/3b2a92e1-d95a-4a3e-a07e-62e5100931bb-kube-api-access-5mdkn\") pod \"cni-sysctl-allowlist-ds-2f4tq\" (UID: \"3b2a92e1-d95a-4a3e-a07e-62e5100931bb\") " pod="openshift-multus/cni-sysctl-allowlist-ds-2f4tq" Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.042399 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3b2a92e1-d95a-4a3e-a07e-62e5100931bb-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-2f4tq\" (UID: \"3b2a92e1-d95a-4a3e-a07e-62e5100931bb\") " pod="openshift-multus/cni-sysctl-allowlist-ds-2f4tq" Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.042518 5104 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/70d57bd5-86d2-4a87-baac-b6c03e6b5cb2-tmp-dir\") pod \"dns-default-72hww\" (UID: \"70d57bd5-86d2-4a87-baac-b6c03e6b5cb2\") " pod="openshift-dns/dns-default-72hww" Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.042608 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gn2mh\" (UniqueName: \"kubernetes.io/projected/69495734-d360-40f9-bf1c-98a808e7f987-kube-api-access-gn2mh\") pod \"ingress-canary-vhqhg\" (UID: \"69495734-d360-40f9-bf1c-98a808e7f987\") " pod="openshift-ingress-canary/ingress-canary-vhqhg" Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.042793 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/3b2a92e1-d95a-4a3e-a07e-62e5100931bb-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-2f4tq\" (UID: \"3b2a92e1-d95a-4a3e-a07e-62e5100931bb\") " pod="openshift-multus/cni-sysctl-allowlist-ds-2f4tq" Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.042927 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/70d57bd5-86d2-4a87-baac-b6c03e6b5cb2-config-volume\") pod \"dns-default-72hww\" (UID: \"70d57bd5-86d2-4a87-baac-b6c03e6b5cb2\") " pod="openshift-dns/dns-default-72hww" Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.043007 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/69495734-d360-40f9-bf1c-98a808e7f987-cert\") pod \"ingress-canary-vhqhg\" (UID: \"69495734-d360-40f9-bf1c-98a808e7f987\") " pod="openshift-ingress-canary/ingress-canary-vhqhg" Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.043077 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\"" Jan 30 00:12:19 crc kubenswrapper[5104]: E0130 00:12:19.043239 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:19.543215421 +0000 UTC m=+120.275554640 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.043456 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/3b2a92e1-d95a-4a3e-a07e-62e5100931bb-ready\") pod \"cni-sysctl-allowlist-ds-2f4tq\" (UID: \"3b2a92e1-d95a-4a3e-a07e-62e5100931bb\") " pod="openshift-multus/cni-sysctl-allowlist-ds-2f4tq" Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.043247 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3b2a92e1-d95a-4a3e-a07e-62e5100931bb-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-2f4tq\" (UID: \"3b2a92e1-d95a-4a3e-a07e-62e5100931bb\") " pod="openshift-multus/cni-sysctl-allowlist-ds-2f4tq" Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.043583 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/70d57bd5-86d2-4a87-baac-b6c03e6b5cb2-metrics-tls\") pod \"dns-default-72hww\" (UID: 
\"70d57bd5-86d2-4a87-baac-b6c03e6b5cb2\") " pod="openshift-dns/dns-default-72hww" Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.043728 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jtqrw\" (UniqueName: \"kubernetes.io/projected/70d57bd5-86d2-4a87-baac-b6c03e6b5cb2-kube-api-access-jtqrw\") pod \"dns-default-72hww\" (UID: \"70d57bd5-86d2-4a87-baac-b6c03e6b5cb2\") " pod="openshift-dns/dns-default-72hww" Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.043862 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/3b2a92e1-d95a-4a3e-a07e-62e5100931bb-ready\") pod \"cni-sysctl-allowlist-ds-2f4tq\" (UID: \"3b2a92e1-d95a-4a3e-a07e-62e5100931bb\") " pod="openshift-multus/cni-sysctl-allowlist-ds-2f4tq" Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.044303 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/70d57bd5-86d2-4a87-baac-b6c03e6b5cb2-tmp-dir\") pod \"dns-default-72hww\" (UID: \"70d57bd5-86d2-4a87-baac-b6c03e6b5cb2\") " pod="openshift-dns/dns-default-72hww" Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.044927 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2c01d6b4-a210-4e12-bb14-2694d7e41659-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-sdlg2\" (UID: \"2c01d6b4-a210-4e12-bb14-2694d7e41659\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-sdlg2" Jan 30 00:12:19 crc kubenswrapper[5104]: W0130 00:12:19.048391 5104 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5cd08e97_a118_4f88_b699_2a0bb507b241.slice/crio-5955c8c3d49eb90085e5d4dd8c43613982bb6d5551800f675a463de783edc0a3 WatchSource:0}: Error finding container 
5955c8c3d49eb90085e5d4dd8c43613982bb6d5551800f675a463de783edc0a3: Status 404 returned error can't find the container with id 5955c8c3d49eb90085e5d4dd8c43613982bb6d5551800f675a463de783edc0a3 Jan 30 00:12:19 crc kubenswrapper[5104]: W0130 00:12:19.056897 5104 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode47c44fb_6570_4180_9ce2_311e50c7956c.slice/crio-15d689b1a0c27aef7050f7121e375ea4043638517cc5f40470f334d4947b3ca4 WatchSource:0}: Error finding container 15d689b1a0c27aef7050f7121e375ea4043638517cc5f40470f334d4947b3ca4: Status 404 returned error can't find the container with id 15d689b1a0c27aef7050f7121e375ea4043638517cc5f40470f334d4947b3ca4 Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.062537 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\"" Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.066472 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2c01d6b4-a210-4e12-bb14-2694d7e41659-serving-cert\") pod \"authentication-operator-7f5c659b84-sdlg2\" (UID: \"2c01d6b4-a210-4e12-bb14-2694d7e41659\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-sdlg2" Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.082733 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\"" Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.111490 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\"" Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.113056 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/2c01d6b4-a210-4e12-bb14-2694d7e41659-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-sdlg2\" (UID: \"2c01d6b4-a210-4e12-bb14-2694d7e41659\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-sdlg2" Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.122613 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.132607 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/43bd3b33-35f9-480e-9425-26cc2318094f-config-volume\") pod \"collect-profiles-29495520-7xhcr\" (UID: \"43bd3b33-35f9-480e-9425-26cc2318094f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495520-7xhcr" Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.143769 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.144668 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:19 crc kubenswrapper[5104]: E0130 00:12:19.144987 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:19.644973677 +0000 UTC m=+120.377312896 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-lhbqs" (UID: "40d2656d-a61b-4aaa-8860-225ca88ac6a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.147155 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-pdjtd" Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.154351 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-gkzts" Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.160428 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-dmtb5" Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.194233 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\"" Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.203207 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\"" Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.222594 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\"" Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.230910 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c1186351-b63f-4a39-b8e6-e01f0b686544-auth-proxy-config\") pod 
\"machine-approver-54c688565-wmdwr\" (UID: \"c1186351-b63f-4a39-b8e6-e01f0b686544\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-wmdwr" Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.243229 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\"" Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.246181 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:19 crc kubenswrapper[5104]: E0130 00:12:19.246548 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:19.746530039 +0000 UTC m=+120.478869258 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.255252 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/c1186351-b63f-4a39-b8e6-e01f0b686544-machine-approver-tls\") pod \"machine-approver-54c688565-wmdwr\" (UID: \"c1186351-b63f-4a39-b8e6-e01f0b686544\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-wmdwr"
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.265529 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\""
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.272963 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1186351-b63f-4a39-b8e6-e01f0b686544-config\") pod \"machine-approver-54c688565-wmdwr\" (UID: \"c1186351-b63f-4a39-b8e6-e01f0b686544\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-wmdwr"
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.282674 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\""
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.287790 5104 generic.go:358] "Generic (PLEG): container finished" podID="512ba09a-c537-4c10-86c4-6226498ce0e0" containerID="0ace94a050c9859659a9d17b5e7db881435c9414e7c8d83079ac6af444c01895" exitCode=0
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.303745 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\""
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.322252 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\""
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.330080 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/73af8639-1dae-4861-8165-94c6c5410e1b-certs\") pod \"machine-config-server-t2sbz\" (UID: \"73af8639-1dae-4861-8165-94c6c5410e1b\") " pod="openshift-machine-config-operator/machine-config-server-t2sbz"
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.344488 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\""
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.349991 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs"
Jan 30 00:12:19 crc kubenswrapper[5104]: E0130 00:12:19.350518 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:19.850492145 +0000 UTC m=+120.582831364 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-lhbqs" (UID: "40d2656d-a61b-4aaa-8860-225ca88ac6a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.359083 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/73af8639-1dae-4861-8165-94c6c5410e1b-node-bootstrap-token\") pod \"machine-config-server-t2sbz\" (UID: \"73af8639-1dae-4861-8165-94c6c5410e1b\") " pod="openshift-machine-config-operator/machine-config-server-t2sbz"
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.380265 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzvwv\" (UniqueName: \"kubernetes.io/projected/aaa499cf-4449-4b32-9182-39c7d73cf064-kube-api-access-xzvwv\") pod \"package-server-manager-77f986bd66-6rcx2\" (UID: \"aaa499cf-4449-4b32-9182-39c7d73cf064\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-6rcx2"
Jan 30 00:12:19 crc kubenswrapper[5104]: W0130 00:12:19.394358 5104 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod254976ad_3d5f_484e_b3ec_4dbc14567032.slice/crio-23cd14abdb4eeaa5762009018d53ce3e4d646d6e71637ff87820cfb2c5928bcc WatchSource:0}: Error finding container 23cd14abdb4eeaa5762009018d53ce3e4d646d6e71637ff87820cfb2c5928bcc: Status 404 returned error can't find the container with id 23cd14abdb4eeaa5762009018d53ce3e4d646d6e71637ff87820cfb2c5928bcc
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.404510 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xsvwj\" (UniqueName: \"kubernetes.io/projected/3f7789da-fc14-4144-8d2e-44a08ce5dd85-kube-api-access-xsvwj\") pod \"control-plane-machine-set-operator-75ffdb6fcd-9lr7t\" (UID: \"3f7789da-fc14-4144-8d2e-44a08ce5dd85\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-9lr7t"
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.414870 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbl56\" (UniqueName: \"kubernetes.io/projected/9eebb380-6a1e-49b2-bd63-222bc499058b-kube-api-access-cbl56\") pod \"service-ca-operator-5b9c976747-bn2ph\" (UID: \"9eebb380-6a1e-49b2-bd63-222bc499058b\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-bn2ph"
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.437243 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpqdw\" (UniqueName: \"kubernetes.io/projected/cd4db4af-c1ef-4771-88bf-d372af1849fa-kube-api-access-wpqdw\") pod \"packageserver-7d4fc7d867-grfh9\" (UID: \"cd4db4af-c1ef-4771-88bf-d372af1849fa\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-grfh9"
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.451137 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 30 00:12:19 crc kubenswrapper[5104]: E0130 00:12:19.451288 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:19.951252505 +0000 UTC m=+120.683591734 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.451451 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs"
Jan 30 00:12:19 crc kubenswrapper[5104]: E0130 00:12:19.451758 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:19.951739468 +0000 UTC m=+120.684078717 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-lhbqs" (UID: "40d2656d-a61b-4aaa-8860-225ca88ac6a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.457866 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqcv2\" (UniqueName: \"kubernetes.io/projected/10904391-ad3c-46eb-8147-c32c0612487c-kube-api-access-qqcv2\") pod \"machine-config-controller-f9cdd68f7-tgcbf\" (UID: \"10904391-ad3c-46eb-8147-c32c0612487c\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-tgcbf"
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.463080 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\""
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.486050 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\""
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.488197 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-6zt5n" event={"ID":"e47c44fb-6570-4180-9ce2-311e50c7956c","Type":"ContainerStarted","Data":"15d689b1a0c27aef7050f7121e375ea4043638517cc5f40470f334d4947b3ca4"}
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.488266 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-862pd"]
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.488299 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-jd2mk" event={"ID":"656c26bb-2611-460d-b115-ad18f57cc138","Type":"ContainerStarted","Data":"47e38de9a623c9c992c4bd08d3a3a84d07b8c8c684b1fd6a258b346169800e92"}
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.488318 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-896z6" event={"ID":"a15215a1-cfcc-4601-9e8c-1726c1837773","Type":"ContainerStarted","Data":"3ea55122a2884fb5aca9d9872104f9af0c29d9da7d3e48472aafd96045ee7c01"}
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.488356 5104 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-68cf44c8b8-xs5zv"
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.488375 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-gvjb6" event={"ID":"8549d8ab-08fd-4d10-b03e-d162d745184a","Type":"ContainerStarted","Data":"71e03e6c204cda88605299799b24d4f0ff4593790111ae8a78ae4c333382c909"}
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.488402 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-c5tsr" event={"ID":"ff629e62-b58e-4d85-aa96-fbc1845b304b","Type":"ContainerStarted","Data":"2994680e1a5a26eec666f4aa8261e2498488b22bc4469e01c7cd3f098b69a32c"}
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.488430 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-v56dx" event={"ID":"512ba09a-c537-4c10-86c4-6226498ce0e0","Type":"ContainerDied","Data":"0ace94a050c9859659a9d17b5e7db881435c9414e7c8d83079ac6af444c01895"}
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.488457 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-mh68h" event={"ID":"ee7fb654-66c4-4c3d-88ff-e2f8f8b78f88","Type":"ContainerStarted","Data":"25b124aa9aa138ce7dcb5d605f1e2bb4ace67c824a487ef95e5819f906ea7b29"}
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.488522 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-vklm2" event={"ID":"994594a7-ccc0-4f06-84ca-89f4e3561a2f","Type":"ContainerStarted","Data":"f3ca854f8a6354d85a301d8460774c632c3d2991ffca2961b6b037bf8316a5d6"}
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.488556 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"072996a8bb27823cbf07bacf320fe8e294f3e9c44b8382f4ac79ada5a4065d00"}
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.488570 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-x97vp" event={"ID":"5cd08e97-a118-4f88-b699-2a0bb507b241","Type":"ContainerStarted","Data":"5955c8c3d49eb90085e5d4dd8c43613982bb6d5551800f675a463de783edc0a3"}
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.488596 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-mh68h"]
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.488617 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-d9wqk"]
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.488628 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-l7gdh"]
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.488638 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-pruner-29495520-4kxpc"]
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.488648 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-v56dx"]
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.488659 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-xkl2m"]
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.488670 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-d6sw5"]
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.488680 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-g766x"]
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.488692 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-jd2mk"]
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.488702 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-c5tsr"]
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.488716 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-gvjb6"]
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.488747 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-r9m28"]
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.488764 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-jn5m9"]
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.488773 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-675xg"]
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.488782 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-kzx6r"]
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.488795 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-g4wlb"]
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.488804 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-qtds2"]
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.488815 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-m6rzk"]
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.488824 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-pzv69"]
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.488838 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-7pgw4"]
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.489096 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-6rcx2"
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.489347 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-862pd"
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.497488 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/69495734-d360-40f9-bf1c-98a808e7f987-cert\") pod \"ingress-canary-vhqhg\" (UID: \"69495734-d360-40f9-bf1c-98a808e7f987\") " pod="openshift-ingress-canary/ingress-canary-vhqhg"
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.506569 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.508332 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\""
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.523993 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\""
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.545430 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-vklm2"]
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.548540 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-66458b6674-g766x"
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.549069 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-896z6"]
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.550293 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-sysctl-allowlist\""
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.553106 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 30 00:12:19 crc kubenswrapper[5104]: E0130 00:12:19.553394 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:20.053380222 +0000 UTC m=+120.785719441 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.553598 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-x97vp"]
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.554121 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/3b2a92e1-d95a-4a3e-a07e-62e5100931bb-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-2f4tq\" (UID: \"3b2a92e1-d95a-4a3e-a07e-62e5100931bb\") " pod="openshift-multus/cni-sysctl-allowlist-ds-2f4tq"
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.556396 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-6zt5n"]
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.557737 5104 patch_prober.go:28] interesting pod/oauth-openshift-66458b6674-g766x container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.10:6443/healthz\": dial tcp 10.217.0.10:6443: connect: connection refused" start-of-body=
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.557865 5104 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-66458b6674-g766x" podUID="c47b4509-0bb1-4360-9db3-29ebfcd734e3" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.10:6443/healthz\": dial tcp 10.217.0.10:6443: connect: connection refused"
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.561630 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-gkzts"]
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.577554 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-9lr7t"
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.582077 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-tgcbf"
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.583683 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfncx\" (UniqueName: \"kubernetes.io/projected/2c01d6b4-a210-4e12-bb14-2694d7e41659-kube-api-access-xfncx\") pod \"authentication-operator-7f5c659b84-sdlg2\" (UID: \"2c01d6b4-a210-4e12-bb14-2694d7e41659\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-sdlg2"
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.608761 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-l7gdh"
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.609612 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdl72\" (UniqueName: \"kubernetes.io/projected/6fa87bbe-4a1f-4c3e-a2cb-5c6f6d9440a6-kube-api-access-hdl72\") pod \"service-ca-74545575db-fgflv\" (UID: \"6fa87bbe-4a1f-4c3e-a2cb-5c6f6d9440a6\") " pod="openshift-service-ca/service-ca-74545575db-fgflv"
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.610845 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-pdjtd"]
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.618914 5104 patch_prober.go:28] interesting pod/controller-manager-65b6cccf98-l7gdh container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body=
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.618956 5104 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-65b6cccf98-l7gdh" podUID="df0257f9-bd1a-4915-8db4-aec4ffda4826" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused"
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.620961 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-dmtb5"]
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.629997 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cx79z\" (UniqueName: \"kubernetes.io/projected/b5f128e0-a6da-409d-9937-dc7f8b000da0-kube-api-access-cx79z\") pod \"marketplace-operator-547dbd544d-mb4lh\" (UID: \"b5f128e0-a6da-409d-9937-dc7f8b000da0\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-mb4lh"
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.639351 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pg5mk\" (UniqueName: \"kubernetes.io/projected/d559e43d-60f9-4f29-8d4e-c595cad2bd22-kube-api-access-pg5mk\") pod \"catalog-operator-75ff9f647d-jcrzz\" (UID: \"d559e43d-60f9-4f29-8d4e-c595cad2bd22\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-jcrzz"
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.647501 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\""
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.658956 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/6f4c6c29-73d8-44bd-8ab8-1bafe595cf8c-mountpoint-dir\") pod \"csi-hostpathplugin-862pd\" (UID: \"6f4c6c29-73d8-44bd-8ab8-1bafe595cf8c\") " pod="hostpath-provisioner/csi-hostpathplugin-862pd"
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.659233 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs"
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.659354 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgwhr\" (UniqueName: \"kubernetes.io/projected/6f4c6c29-73d8-44bd-8ab8-1bafe595cf8c-kube-api-access-bgwhr\") pod \"csi-hostpathplugin-862pd\" (UID: \"6f4c6c29-73d8-44bd-8ab8-1bafe595cf8c\") " pod="hostpath-provisioner/csi-hostpathplugin-862pd"
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.659744 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6f4c6c29-73d8-44bd-8ab8-1bafe595cf8c-registration-dir\") pod \"csi-hostpathplugin-862pd\" (UID: \"6f4c6c29-73d8-44bd-8ab8-1bafe595cf8c\") " pod="hostpath-provisioner/csi-hostpathplugin-862pd"
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.659874 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/6f4c6c29-73d8-44bd-8ab8-1bafe595cf8c-csi-data-dir\") pod \"csi-hostpathplugin-862pd\" (UID: \"6f4c6c29-73d8-44bd-8ab8-1bafe595cf8c\") " pod="hostpath-provisioner/csi-hostpathplugin-862pd"
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.659998 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/6f4c6c29-73d8-44bd-8ab8-1bafe595cf8c-plugins-dir\") pod \"csi-hostpathplugin-862pd\" (UID: \"6f4c6c29-73d8-44bd-8ab8-1bafe595cf8c\") " pod="hostpath-provisioner/csi-hostpathplugin-862pd"
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.660147 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6f4c6c29-73d8-44bd-8ab8-1bafe595cf8c-socket-dir\") pod \"csi-hostpathplugin-862pd\" (UID: \"6f4c6c29-73d8-44bd-8ab8-1bafe595cf8c\") " pod="hostpath-provisioner/csi-hostpathplugin-862pd"
Jan 30 00:12:19 crc kubenswrapper[5104]: E0130 00:12:19.664056 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:20.164040549 +0000 UTC m=+120.896379768 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-lhbqs" (UID: "40d2656d-a61b-4aaa-8860-225ca88ac6a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.669334 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\""
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.669763 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-bn2ph"
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.674216 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/70d57bd5-86d2-4a87-baac-b6c03e6b5cb2-config-volume\") pod \"dns-default-72hww\" (UID: \"70d57bd5-86d2-4a87-baac-b6c03e6b5cb2\") " pod="openshift-dns/dns-default-72hww"
Jan 30 00:12:19 crc kubenswrapper[5104]: W0130 00:12:19.675438 5104 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod51278e19_fef3_4056_bdb6_f9f60f3a65e0.slice/crio-05da00975995c07d4274292c920b6bf9b567ab759ed0309910afc84ec18479e0 WatchSource:0}: Error finding container 05da00975995c07d4274292c920b6bf9b567ab759ed0309910afc84ec18479e0: Status 404 returned error can't find the container with id 05da00975995c07d4274292c920b6bf9b567ab759ed0309910afc84ec18479e0
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.688283 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\""
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.704129 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/70d57bd5-86d2-4a87-baac-b6c03e6b5cb2-metrics-tls\") pod \"dns-default-72hww\" (UID: \"70d57bd5-86d2-4a87-baac-b6c03e6b5cb2\") " pod="openshift-dns/dns-default-72hww"
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.717496 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ll4bp\" (UniqueName: \"kubernetes.io/projected/73af8639-1dae-4861-8165-94c6c5410e1b-kube-api-access-ll4bp\") pod \"machine-config-server-t2sbz\" (UID: \"73af8639-1dae-4861-8165-94c6c5410e1b\") " pod="openshift-machine-config-operator/machine-config-server-t2sbz"
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.725459 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-grfh9"
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.740947 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmp6h\" (UniqueName: \"kubernetes.io/projected/43bd3b33-35f9-480e-9425-26cc2318094f-kube-api-access-gmp6h\") pod \"collect-profiles-29495520-7xhcr\" (UID: \"43bd3b33-35f9-480e-9425-26cc2318094f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495520-7xhcr"
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.760056 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vl7xk\" (UniqueName: \"kubernetes.io/projected/c1186351-b63f-4a39-b8e6-e01f0b686544-kube-api-access-vl7xk\") pod \"machine-approver-54c688565-wmdwr\" (UID: \"c1186351-b63f-4a39-b8e6-e01f0b686544\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-wmdwr"
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.761537 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-6rcx2"]
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.764882 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 30 00:12:19 crc kubenswrapper[5104]: E0130 00:12:19.765047 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:20.265026615 +0000 UTC m=+120.997365834 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.765165 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6f4c6c29-73d8-44bd-8ab8-1bafe595cf8c-registration-dir\") pod \"csi-hostpathplugin-862pd\" (UID: \"6f4c6c29-73d8-44bd-8ab8-1bafe595cf8c\") " pod="hostpath-provisioner/csi-hostpathplugin-862pd"
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.765210 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/6f4c6c29-73d8-44bd-8ab8-1bafe595cf8c-csi-data-dir\") pod \"csi-hostpathplugin-862pd\" (UID: \"6f4c6c29-73d8-44bd-8ab8-1bafe595cf8c\") " pod="hostpath-provisioner/csi-hostpathplugin-862pd"
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.765241 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/6f4c6c29-73d8-44bd-8ab8-1bafe595cf8c-plugins-dir\") pod \"csi-hostpathplugin-862pd\" (UID: \"6f4c6c29-73d8-44bd-8ab8-1bafe595cf8c\") " pod="hostpath-provisioner/csi-hostpathplugin-862pd"
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.765282 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6f4c6c29-73d8-44bd-8ab8-1bafe595cf8c-socket-dir\") pod \"csi-hostpathplugin-862pd\" (UID: \"6f4c6c29-73d8-44bd-8ab8-1bafe595cf8c\") " pod="hostpath-provisioner/csi-hostpathplugin-862pd"
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.765304 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/6f4c6c29-73d8-44bd-8ab8-1bafe595cf8c-mountpoint-dir\") pod \"csi-hostpathplugin-862pd\" (UID: \"6f4c6c29-73d8-44bd-8ab8-1bafe595cf8c\") " pod="hostpath-provisioner/csi-hostpathplugin-862pd"
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.765378 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs"
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.765432 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bgwhr\" (UniqueName: \"kubernetes.io/projected/6f4c6c29-73d8-44bd-8ab8-1bafe595cf8c-kube-api-access-bgwhr\") pod \"csi-hostpathplugin-862pd\" (UID: \"6f4c6c29-73d8-44bd-8ab8-1bafe595cf8c\") " pod="hostpath-provisioner/csi-hostpathplugin-862pd"
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.766123 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/6f4c6c29-73d8-44bd-8ab8-1bafe595cf8c-mountpoint-dir\") pod \"csi-hostpathplugin-862pd\" (UID: \"6f4c6c29-73d8-44bd-8ab8-1bafe595cf8c\") " pod="hostpath-provisioner/csi-hostpathplugin-862pd"
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.766202 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/6f4c6c29-73d8-44bd-8ab8-1bafe595cf8c-csi-data-dir\") pod \"csi-hostpathplugin-862pd\" (UID: \"6f4c6c29-73d8-44bd-8ab8-1bafe595cf8c\") " pod="hostpath-provisioner/csi-hostpathplugin-862pd"
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.766443 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/6f4c6c29-73d8-44bd-8ab8-1bafe595cf8c-plugins-dir\") pod \"csi-hostpathplugin-862pd\" (UID: \"6f4c6c29-73d8-44bd-8ab8-1bafe595cf8c\") " pod="hostpath-provisioner/csi-hostpathplugin-862pd"
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.766551 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6f4c6c29-73d8-44bd-8ab8-1bafe595cf8c-socket-dir\") pod \"csi-hostpathplugin-862pd\" (UID: \"6f4c6c29-73d8-44bd-8ab8-1bafe595cf8c\") " pod="hostpath-provisioner/csi-hostpathplugin-862pd"
Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.766580 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6f4c6c29-73d8-44bd-8ab8-1bafe595cf8c-registration-dir\") pod \"csi-hostpathplugin-862pd\" (UID: \"6f4c6c29-73d8-44bd-8ab8-1bafe595cf8c\") " pod="hostpath-provisioner/csi-hostpathplugin-862pd"
Jan 30 00:12:19 crc kubenswrapper[5104]: E0130 00:12:19.766923 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:20.266912675 +0000 UTC m=+120.999251894 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-lhbqs" (UID: "40d2656d-a61b-4aaa-8860-225ca88ac6a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.778723 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mdkn\" (UniqueName: \"kubernetes.io/projected/3b2a92e1-d95a-4a3e-a07e-62e5100931bb-kube-api-access-5mdkn\") pod \"cni-sysctl-allowlist-ds-2f4tq\" (UID: \"3b2a92e1-d95a-4a3e-a07e-62e5100931bb\") " pod="openshift-multus/cni-sysctl-allowlist-ds-2f4tq" Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.791474 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-fgflv" Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.795745 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-jcrzz" Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.806465 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-mb4lh" Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.814762 5104 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-sdlg2" Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.817830 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtqrw\" (UniqueName: \"kubernetes.io/projected/70d57bd5-86d2-4a87-baac-b6c03e6b5cb2-kube-api-access-jtqrw\") pod \"dns-default-72hww\" (UID: \"70d57bd5-86d2-4a87-baac-b6c03e6b5cb2\") " pod="openshift-dns/dns-default-72hww" Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.821641 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gn2mh\" (UniqueName: \"kubernetes.io/projected/69495734-d360-40f9-bf1c-98a808e7f987-kube-api-access-gn2mh\") pod \"ingress-canary-vhqhg\" (UID: \"69495734-d360-40f9-bf1c-98a808e7f987\") " pod="openshift-ingress-canary/ingress-canary-vhqhg" Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.827636 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495520-7xhcr" Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.833369 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-wmdwr" Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.842734 5104 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-t2sbz" Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.843271 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\"" Jan 30 00:12:19 crc kubenswrapper[5104]: W0130 00:12:19.843309 5104 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaaa499cf_4449_4b32_9182_39c7d73cf064.slice/crio-383b4d348b53f531c6ef9b42b9b9e629f3764d9e5b1109a60af77bc5264028aa WatchSource:0}: Error finding container 383b4d348b53f531c6ef9b42b9b9e629f3764d9e5b1109a60af77bc5264028aa: Status 404 returned error can't find the container with id 383b4d348b53f531c6ef9b42b9b9e629f3764d9e5b1109a60af77bc5264028aa Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.850708 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-vhqhg" Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.858725 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-2f4tq" Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.864052 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\"" Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.866607 5104 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-72hww" Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.867077 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:19 crc kubenswrapper[5104]: E0130 00:12:19.867388 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:20.367372537 +0000 UTC m=+121.099711756 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.877603 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-9lr7t"] Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.884086 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\"" Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.919801 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-tgcbf"] Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.922894 5104 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgwhr\" (UniqueName: \"kubernetes.io/projected/6f4c6c29-73d8-44bd-8ab8-1bafe595cf8c-kube-api-access-bgwhr\") pod \"csi-hostpathplugin-862pd\" (UID: \"6f4c6c29-73d8-44bd-8ab8-1bafe595cf8c\") " pod="hostpath-provisioner/csi-hostpathplugin-862pd" Jan 30 00:12:19 crc kubenswrapper[5104]: W0130 00:12:19.936828 5104 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc1186351_b63f_4a39_b8e6_e01f0b686544.slice/crio-7a06b6d1bf362b148e96179e81863ced0dce35961e09b79545557954ebb7db81 WatchSource:0}: Error finding container 7a06b6d1bf362b148e96179e81863ced0dce35961e09b79545557954ebb7db81: Status 404 returned error can't find the container with id 7a06b6d1bf362b148e96179e81863ced0dce35961e09b79545557954ebb7db81 Jan 30 00:12:19 crc kubenswrapper[5104]: W0130 00:12:19.958021 5104 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod10904391_ad3c_46eb_8147_c32c0612487c.slice/crio-a72e1d75a355e80be282bb14b89adf91671d96e870744bee9c14ae6bf8dce460 WatchSource:0}: Error finding container a72e1d75a355e80be282bb14b89adf91671d96e870744bee9c14ae6bf8dce460: Status 404 returned error can't find the container with id a72e1d75a355e80be282bb14b89adf91671d96e870744bee9c14ae6bf8dce460 Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.970232 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:19 crc kubenswrapper[5104]: E0130 00:12:19.970652 5104 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:20.470634245 +0000 UTC m=+121.202973474 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-lhbqs" (UID: "40d2656d-a61b-4aaa-8860-225ca88ac6a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.995170 5104 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-xs5zv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 00:12:19 crc kubenswrapper[5104]: [-]has-synced failed: reason withheld Jan 30 00:12:19 crc kubenswrapper[5104]: [+]process-running ok Jan 30 00:12:19 crc kubenswrapper[5104]: healthz check failed Jan 30 00:12:19 crc kubenswrapper[5104]: I0130 00:12:19.995227 5104 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-xs5zv" podUID="5b96d7cb-4106-4adb-baab-92ec201306e2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 00:12:20 crc kubenswrapper[5104]: I0130 00:12:20.071522 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:20 crc kubenswrapper[5104]: E0130 00:12:20.071842 5104 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:20.571778615 +0000 UTC m=+121.304117834 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:20 crc kubenswrapper[5104]: I0130 00:12:20.072048 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:20 crc kubenswrapper[5104]: E0130 00:12:20.072882 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:20.572874045 +0000 UTC m=+121.305213264 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-lhbqs" (UID: "40d2656d-a61b-4aaa-8860-225ca88ac6a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:20 crc kubenswrapper[5104]: I0130 00:12:20.108079 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-grfh9"] Jan 30 00:12:20 crc kubenswrapper[5104]: I0130 00:12:20.128585 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-862pd" Jan 30 00:12:20 crc kubenswrapper[5104]: W0130 00:12:20.140525 5104 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod73af8639_1dae_4861_8165_94c6c5410e1b.slice/crio-2817e66f2a96eac789289d03a72df1e77c856ebd3cb8c3ee4de8929194f744e5 WatchSource:0}: Error finding container 2817e66f2a96eac789289d03a72df1e77c856ebd3cb8c3ee4de8929194f744e5: Status 404 returned error can't find the container with id 2817e66f2a96eac789289d03a72df1e77c856ebd3cb8c3ee4de8929194f744e5 Jan 30 00:12:20 crc kubenswrapper[5104]: I0130 00:12:20.166789 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-bn2ph"] Jan 30 00:12:20 crc kubenswrapper[5104]: I0130 00:12:20.173910 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:20 crc 
kubenswrapper[5104]: E0130 00:12:20.174616 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:20.674601021 +0000 UTC m=+121.406940240 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:20 crc kubenswrapper[5104]: W0130 00:12:20.185924 5104 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcd4db4af_c1ef_4771_88bf_d372af1849fa.slice/crio-cc1a38c6f0d97e132f9d2ae6eed363106ba01328396c268cbc19e7cc51f14258 WatchSource:0}: Error finding container cc1a38c6f0d97e132f9d2ae6eed363106ba01328396c268cbc19e7cc51f14258: Status 404 returned error can't find the container with id cc1a38c6f0d97e132f9d2ae6eed363106ba01328396c268cbc19e7cc51f14258 Jan 30 00:12:20 crc kubenswrapper[5104]: I0130 00:12:20.218592 5104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-jd2mk" podStartSLOduration=97.218517426 podStartE2EDuration="1m37.218517426s" podCreationTimestamp="2026-01-30 00:10:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:20.175026362 +0000 UTC m=+120.907365581" watchObservedRunningTime="2026-01-30 00:12:20.218517426 +0000 UTC m=+120.950856645" Jan 30 00:12:20 crc 
kubenswrapper[5104]: I0130 00:12:20.282142 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:20 crc kubenswrapper[5104]: E0130 00:12:20.282541 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:20.782529044 +0000 UTC m=+121.514868263 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-lhbqs" (UID: "40d2656d-a61b-4aaa-8860-225ca88ac6a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:20 crc kubenswrapper[5104]: I0130 00:12:20.357416 5104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-66458b6674-g766x" podStartSLOduration=97.357388784 podStartE2EDuration="1m37.357388784s" podCreationTimestamp="2026-01-30 00:10:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:20.349970625 +0000 UTC m=+121.082309844" watchObservedRunningTime="2026-01-30 00:12:20.357388784 +0000 UTC m=+121.089728003" Jan 30 00:12:20 crc kubenswrapper[5104]: I0130 00:12:20.373560 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-896z6" event={"ID":"a15215a1-cfcc-4601-9e8c-1726c1837773","Type":"ContainerStarted","Data":"c639ff830708f62b29b18bb9d0bd0a2f48c20fbd134ff10efad0206c06e06d08"} Jan 30 00:12:20 crc kubenswrapper[5104]: I0130 00:12:20.383953 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:20 crc kubenswrapper[5104]: E0130 00:12:20.385006 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:20.88498856 +0000 UTC m=+121.617327779 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:20 crc kubenswrapper[5104]: I0130 00:12:20.399210 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-grfh9" event={"ID":"cd4db4af-c1ef-4771-88bf-d372af1849fa","Type":"ContainerStarted","Data":"cc1a38c6f0d97e132f9d2ae6eed363106ba01328396c268cbc19e7cc51f14258"} Jan 30 00:12:20 crc kubenswrapper[5104]: I0130 00:12:20.402364 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-xkl2m" event={"ID":"5c4af38b-fd2a-49b5-be40-cbd25eba4bde","Type":"ContainerStarted","Data":"d7401541e8ca228c5ba8d30593d0af2ba0c566600342c3df3a9db6e5bb1ae0ed"} Jan 30 00:12:20 crc kubenswrapper[5104]: I0130 00:12:20.409583 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-r9m28" event={"ID":"d158deef-46a2-4f4b-bd06-fce37341fa01","Type":"ContainerStarted","Data":"f8a7268766227c70df9036c16cb979f66f49898567c025835f516512c2024428"} Jan 30 00:12:20 crc kubenswrapper[5104]: I0130 00:12:20.413471 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-bn2ph" event={"ID":"9eebb380-6a1e-49b2-bd63-222bc499058b","Type":"ContainerStarted","Data":"82326e27b3de2fcdc80ea2790d42ec3dedde4d6c4dd0e901aa0fbcf600892311"} Jan 30 00:12:20 crc kubenswrapper[5104]: I0130 00:12:20.436972 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-tgcbf" 
event={"ID":"10904391-ad3c-46eb-8147-c32c0612487c","Type":"ContainerStarted","Data":"a72e1d75a355e80be282bb14b89adf91671d96e870744bee9c14ae6bf8dce460"} Jan 30 00:12:20 crc kubenswrapper[5104]: I0130 00:12:20.451245 5104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-c5tsr" podStartSLOduration=96.451221448 podStartE2EDuration="1m36.451221448s" podCreationTimestamp="2026-01-30 00:10:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:20.431212098 +0000 UTC m=+121.163551317" watchObservedRunningTime="2026-01-30 00:12:20.451221448 +0000 UTC m=+121.183560667" Jan 30 00:12:20 crc kubenswrapper[5104]: I0130 00:12:20.487945 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:20 crc kubenswrapper[5104]: E0130 00:12:20.488315 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:20.988302079 +0000 UTC m=+121.720641298 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-lhbqs" (UID: "40d2656d-a61b-4aaa-8860-225ca88ac6a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:20 crc kubenswrapper[5104]: I0130 00:12:20.491648 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-d9wqk" event={"ID":"1bef0b46-9def-441e-88e8-f481e45026da","Type":"ContainerStarted","Data":"13f5cd999a0eb4931cfffc4fadde77acf5ca83f5c6f4a7d7da600b12d8a5fe09"} Jan 30 00:12:20 crc kubenswrapper[5104]: I0130 00:12:20.522581 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-m6rzk" event={"ID":"a1f8c00b-3459-4b15-ab8c-52407669c50a","Type":"ContainerStarted","Data":"97d3033d10be6a3adaf7bb21c58d4c438ca434f7d7dda289464deadb739d81a6"} Jan 30 00:12:20 crc kubenswrapper[5104]: I0130 00:12:20.588976 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-vklm2" event={"ID":"994594a7-ccc0-4f06-84ca-89f4e3561a2f","Type":"ContainerStarted","Data":"348f617260581cd19b287030ded66243072264720cb9926ae6dab3f85e1c6638"} Jan 30 00:12:20 crc kubenswrapper[5104]: I0130 00:12:20.589341 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-kzx6r" event={"ID":"6fd43d75-51fe-42d6-9f2a-adbe6045f25c","Type":"ContainerStarted","Data":"bdd06d23c04394ee9149ab8327b0803dc85d394dfb8d66201f6eaa6eecd12546"} Jan 30 00:12:20 crc kubenswrapper[5104]: I0130 00:12:20.589354 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-2f4tq" 
event={"ID":"3b2a92e1-d95a-4a3e-a07e-62e5100931bb","Type":"ContainerStarted","Data":"061781b0b7a45da2281f7da3c5e490d113b2969ea0a87d7867374ded90c21363"} Jan 30 00:12:20 crc kubenswrapper[5104]: I0130 00:12:20.589367 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-wmdwr" event={"ID":"c1186351-b63f-4a39-b8e6-e01f0b686544","Type":"ContainerStarted","Data":"7a06b6d1bf362b148e96179e81863ced0dce35961e09b79545557954ebb7db81"} Jan 30 00:12:20 crc kubenswrapper[5104]: I0130 00:12:20.589378 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-vhqhg"] Jan 30 00:12:20 crc kubenswrapper[5104]: I0130 00:12:20.589395 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-9lr7t" event={"ID":"3f7789da-fc14-4144-8d2e-44a08ce5dd85","Type":"ContainerStarted","Data":"a95da07dcb49f6d501c45f50f4c9f3c74706b6ef5b4dd477302dd607c6884abf"} Jan 30 00:12:20 crc kubenswrapper[5104]: I0130 00:12:20.589404 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-6rcx2" event={"ID":"aaa499cf-4449-4b32-9182-39c7d73cf064","Type":"ContainerStarted","Data":"383b4d348b53f531c6ef9b42b9b9e629f3764d9e5b1109a60af77bc5264028aa"} Jan 30 00:12:20 crc kubenswrapper[5104]: I0130 00:12:20.589413 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-7pgw4" event={"ID":"bad64408-8e74-460b-b652-f12f5920cd21","Type":"ContainerStarted","Data":"8fbf660b8ecb6a631398ec516833f08f30541a59b419ad863bf33b32043bb510"} Jan 30 00:12:20 crc kubenswrapper[5104]: I0130 00:12:20.589424 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-qtds2" 
event={"ID":"de15ba83-bde1-43f2-b924-65926e8a4565","Type":"ContainerStarted","Data":"3eabb1f8a2a58dee35d35b7257047e965e2242497b526be7b38144000819a951"} Jan 30 00:12:20 crc kubenswrapper[5104]: I0130 00:12:20.589898 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:20 crc kubenswrapper[5104]: E0130 00:12:20.590281 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:21.09023745 +0000 UTC m=+121.822576669 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:20 crc kubenswrapper[5104]: I0130 00:12:20.590703 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-t2sbz" event={"ID":"73af8639-1dae-4861-8165-94c6c5410e1b","Type":"ContainerStarted","Data":"2817e66f2a96eac789289d03a72df1e77c856ebd3cb8c3ee4de8929194f744e5"} Jan 30 00:12:20 crc kubenswrapper[5104]: I0130 00:12:20.592194 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-pdjtd" 
event={"ID":"51278e19-fef3-4056-bdb6-f9f60f3a65e0","Type":"ContainerStarted","Data":"05da00975995c07d4274292c920b6bf9b567ab759ed0309910afc84ec18479e0"} Jan 30 00:12:20 crc kubenswrapper[5104]: I0130 00:12:20.595573 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-jn5m9" event={"ID":"d5e98104-79f7-4fb8-b554-f705833000a1","Type":"ContainerStarted","Data":"583429243d226ebcdfee0db308fc95287126016d0926e727b14d4f598f490e61"} Jan 30 00:12:20 crc kubenswrapper[5104]: I0130 00:12:20.597782 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-dmtb5" event={"ID":"ac96d3d5-fde2-4526-9d1d-ed33ebf8a909","Type":"ContainerStarted","Data":"dcfc3ecaea52362be09d1f64e5014e11d0a4dfe4314f128599ad732679d40a33"} Jan 30 00:12:20 crc kubenswrapper[5104]: I0130 00:12:20.600526 5104 generic.go:358] "Generic (PLEG): container finished" podID="06fbf10a-e423-4033-b4cb-ff77c12973d7" containerID="5c471f3a3e540b368db6093f967db2c396dee089a8b53f8ee2ac75f8bbeb0a70" exitCode=0 Jan 30 00:12:20 crc kubenswrapper[5104]: I0130 00:12:20.600698 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-675xg" event={"ID":"06fbf10a-e423-4033-b4cb-ff77c12973d7","Type":"ContainerDied","Data":"5c471f3a3e540b368db6093f967db2c396dee089a8b53f8ee2ac75f8bbeb0a70"} Jan 30 00:12:20 crc kubenswrapper[5104]: I0130 00:12:20.619327 5104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-755bb95488-mh68h" podStartSLOduration=97.619307835 podStartE2EDuration="1m37.619307835s" podCreationTimestamp="2026-01-30 00:10:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:20.618502033 +0000 UTC m=+121.350841252" watchObservedRunningTime="2026-01-30 
00:12:20.619307835 +0000 UTC m=+121.351647054" Jan 30 00:12:20 crc kubenswrapper[5104]: I0130 00:12:20.625042 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-gkzts" event={"ID":"254976ad-3d5f-484e-b3ec-4dbc14567032","Type":"ContainerStarted","Data":"23cd14abdb4eeaa5762009018d53ce3e4d646d6e71637ff87820cfb2c5928bcc"} Jan 30 00:12:20 crc kubenswrapper[5104]: I0130 00:12:20.635200 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-pzv69" event={"ID":"ff5aaf4d-9812-4773-bcd9-a6901952e242","Type":"ContainerStarted","Data":"8dce5be43326638a13bc7c65a245039ef1449ba9374be3812e6902580659403d"} Jan 30 00:12:20 crc kubenswrapper[5104]: I0130 00:12:20.641395 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-g4wlb" event={"ID":"a92e1ebb-86ac-4456-873b-ce575e9cda12","Type":"ContainerStarted","Data":"0660d945b67828c2b9ad9d454b983a618eee49f0c50297abea9118ca4c53a109"} Jan 30 00:12:20 crc kubenswrapper[5104]: I0130 00:12:20.698731 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:20 crc kubenswrapper[5104]: E0130 00:12:20.699879 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:21.199866699 +0000 UTC m=+121.932205908 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-lhbqs" (UID: "40d2656d-a61b-4aaa-8860-225ca88ac6a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:20 crc kubenswrapper[5104]: I0130 00:12:20.799768 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:20 crc kubenswrapper[5104]: E0130 00:12:20.799953 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:21.29992114 +0000 UTC m=+122.032260359 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:20 crc kubenswrapper[5104]: I0130 00:12:20.800309 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:20 crc kubenswrapper[5104]: E0130 00:12:20.800659 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:21.300651869 +0000 UTC m=+122.032991088 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-lhbqs" (UID: "40d2656d-a61b-4aaa-8860-225ca88ac6a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:20 crc kubenswrapper[5104]: I0130 00:12:20.902606 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:20 crc kubenswrapper[5104]: E0130 00:12:20.904057 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:21.40402037 +0000 UTC m=+122.136359589 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:20 crc kubenswrapper[5104]: I0130 00:12:20.988153 5104 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-xs5zv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 00:12:20 crc kubenswrapper[5104]: [-]has-synced failed: reason withheld Jan 30 00:12:20 crc kubenswrapper[5104]: [+]process-running ok Jan 30 00:12:20 crc kubenswrapper[5104]: healthz check failed Jan 30 00:12:20 crc kubenswrapper[5104]: I0130 00:12:20.988226 5104 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-xs5zv" podUID="5b96d7cb-4106-4adb-baab-92ec201306e2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 00:12:21 crc kubenswrapper[5104]: I0130 00:12:21.008293 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:21 crc kubenswrapper[5104]: E0130 00:12:21.009012 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2026-01-30 00:12:21.508999964 +0000 UTC m=+122.241339183 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-lhbqs" (UID: "40d2656d-a61b-4aaa-8860-225ca88ac6a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:21 crc kubenswrapper[5104]: I0130 00:12:21.011939 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-mb4lh"] Jan 30 00:12:21 crc kubenswrapper[5104]: I0130 00:12:21.016513 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-jcrzz"] Jan 30 00:12:21 crc kubenswrapper[5104]: W0130 00:12:21.040752 5104 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb5f128e0_a6da_409d_9937_dc7f8b000da0.slice/crio-31d071cc6a2990d8b046a07f31b22bcd7079e2f4f5ec39eb510d96cdfa48ff6f WatchSource:0}: Error finding container 31d071cc6a2990d8b046a07f31b22bcd7079e2f4f5ec39eb510d96cdfa48ff6f: Status 404 returned error can't find the container with id 31d071cc6a2990d8b046a07f31b22bcd7079e2f4f5ec39eb510d96cdfa48ff6f Jan 30 00:12:21 crc kubenswrapper[5104]: W0130 00:12:21.046960 5104 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd559e43d_60f9_4f29_8d4e_c595cad2bd22.slice/crio-143295d7d46a03e7d18643b1510da454dd0277fff309c561ea7fa41d9b302559 WatchSource:0}: Error finding container 143295d7d46a03e7d18643b1510da454dd0277fff309c561ea7fa41d9b302559: Status 404 returned error can't find the container with id 
143295d7d46a03e7d18643b1510da454dd0277fff309c561ea7fa41d9b302559 Jan 30 00:12:21 crc kubenswrapper[5104]: I0130 00:12:21.109302 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:21 crc kubenswrapper[5104]: E0130 00:12:21.109464 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:21.609431835 +0000 UTC m=+122.341771054 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:21 crc kubenswrapper[5104]: I0130 00:12:21.109608 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:21 crc kubenswrapper[5104]: E0130 00:12:21.110067 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" 
failed. No retries permitted until 2026-01-30 00:12:21.610052912 +0000 UTC m=+122.342392131 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-lhbqs" (UID: "40d2656d-a61b-4aaa-8860-225ca88ac6a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:21 crc kubenswrapper[5104]: I0130 00:12:21.211028 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:21 crc kubenswrapper[5104]: E0130 00:12:21.211199 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:21.711172651 +0000 UTC m=+122.443511890 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:21 crc kubenswrapper[5104]: I0130 00:12:21.211403 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:21 crc kubenswrapper[5104]: E0130 00:12:21.211830 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:21.711812548 +0000 UTC m=+122.444151777 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-lhbqs" (UID: "40d2656d-a61b-4aaa-8860-225ca88ac6a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:21 crc kubenswrapper[5104]: I0130 00:12:21.237128 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-fgflv"] Jan 30 00:12:21 crc kubenswrapper[5104]: I0130 00:12:21.259521 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-sdlg2"] Jan 30 00:12:21 crc kubenswrapper[5104]: I0130 00:12:21.314873 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:21 crc kubenswrapper[5104]: E0130 00:12:21.315165 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:21.815149178 +0000 UTC m=+122.547488407 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:21 crc kubenswrapper[5104]: I0130 00:12:21.334113 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-862pd"] Jan 30 00:12:21 crc kubenswrapper[5104]: I0130 00:12:21.340720 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495520-7xhcr"] Jan 30 00:12:21 crc kubenswrapper[5104]: I0130 00:12:21.349285 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-72hww"] Jan 30 00:12:21 crc kubenswrapper[5104]: I0130 00:12:21.420974 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:21 crc kubenswrapper[5104]: E0130 00:12:21.421367 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:21.921350844 +0000 UTC m=+122.653690063 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-lhbqs" (UID: "40d2656d-a61b-4aaa-8860-225ca88ac6a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:21 crc kubenswrapper[5104]: I0130 00:12:21.521725 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:21 crc kubenswrapper[5104]: E0130 00:12:21.521915 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:22.021843647 +0000 UTC m=+122.754182866 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:21 crc kubenswrapper[5104]: I0130 00:12:21.522291 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:21 crc kubenswrapper[5104]: E0130 00:12:21.522551 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:22.022540336 +0000 UTC m=+122.754879555 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-lhbqs" (UID: "40d2656d-a61b-4aaa-8860-225ca88ac6a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:21 crc kubenswrapper[5104]: I0130 00:12:21.624130 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:21 crc kubenswrapper[5104]: E0130 00:12:21.624332 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:22.124303662 +0000 UTC m=+122.856642881 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:21 crc kubenswrapper[5104]: I0130 00:12:21.625088 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:21 crc kubenswrapper[5104]: E0130 00:12:21.625379 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:22.125372102 +0000 UTC m=+122.857711321 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-lhbqs" (UID: "40d2656d-a61b-4aaa-8860-225ca88ac6a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:21 crc kubenswrapper[5104]: I0130 00:12:21.647310 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-896z6" event={"ID":"a15215a1-cfcc-4601-9e8c-1726c1837773","Type":"ContainerStarted","Data":"04d6288c0b2ba4014dd3bdb84530e80b93e609fcf23032e7f29d549f68fa02ca"} Jan 30 00:12:21 crc kubenswrapper[5104]: I0130 00:12:21.648210 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-862pd" event={"ID":"6f4c6c29-73d8-44bd-8ab8-1bafe595cf8c","Type":"ContainerStarted","Data":"2ed8e0e7be79798e65fea83d23ed2c2c0c83120f177b0b05b7c62742e3f63807"} Jan 30 00:12:21 crc kubenswrapper[5104]: I0130 00:12:21.649176 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495520-7xhcr" event={"ID":"43bd3b33-35f9-480e-9425-26cc2318094f","Type":"ContainerStarted","Data":"58e82b253849134ec372f2bfd32ec725e3fce7e4c97db749aaf4cb0204777e40"} Jan 30 00:12:21 crc kubenswrapper[5104]: I0130 00:12:21.650837 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-v56dx" event={"ID":"512ba09a-c537-4c10-86c4-6226498ce0e0","Type":"ContainerStarted","Data":"a4a0fcbc17eb1ba0b3c6a46651c16ac38a8793010f869bc34c56411246103df9"} Jan 30 00:12:21 crc kubenswrapper[5104]: I0130 00:12:21.651899 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-jcrzz" event={"ID":"d559e43d-60f9-4f29-8d4e-c595cad2bd22","Type":"ContainerStarted","Data":"143295d7d46a03e7d18643b1510da454dd0277fff309c561ea7fa41d9b302559"} Jan 30 00:12:21 crc kubenswrapper[5104]: I0130 00:12:21.653410 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-qtds2" event={"ID":"de15ba83-bde1-43f2-b924-65926e8a4565","Type":"ContainerStarted","Data":"dba3d5e1721c7cbb03aca8ac0ebf447f7ac1c08b92c9933b65122b44993b3083"} Jan 30 00:12:21 crc kubenswrapper[5104]: I0130 00:12:21.654363 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-6zt5n" event={"ID":"e47c44fb-6570-4180-9ce2-311e50c7956c","Type":"ContainerStarted","Data":"ca7cdc32eae735049c2b8108c6b7e89c8f64d4190274501b7f9efc42fe2e90f5"} Jan 30 00:12:21 crc kubenswrapper[5104]: I0130 00:12:21.655402 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-72hww" event={"ID":"70d57bd5-86d2-4a87-baac-b6c03e6b5cb2","Type":"ContainerStarted","Data":"deb64a7f562d80c34578bc699783f250c14459af7c91d96bf88333e3f2243f7b"} Jan 30 00:12:21 crc kubenswrapper[5104]: I0130 00:12:21.656150 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-vhqhg" event={"ID":"69495734-d360-40f9-bf1c-98a808e7f987","Type":"ContainerStarted","Data":"1734deb040199ee2b8074bca604b0b0188413aacce3295d8d0a55c569c94cdb7"} Jan 30 00:12:21 crc kubenswrapper[5104]: I0130 00:12:21.657743 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-gkzts" event={"ID":"254976ad-3d5f-484e-b3ec-4dbc14567032","Type":"ContainerStarted","Data":"5b89b18c1f207bb1df7e89766ffec51801aefa1335d61fba88d58076906735e3"} Jan 30 00:12:21 crc kubenswrapper[5104]: I0130 00:12:21.658755 5104 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-799b87ffcd-xkl2m" podStartSLOduration=98.658746243 podStartE2EDuration="1m38.658746243s" podCreationTimestamp="2026-01-30 00:10:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:21.656330247 +0000 UTC m=+122.388669466" watchObservedRunningTime="2026-01-30 00:12:21.658746243 +0000 UTC m=+122.391085462" Jan 30 00:12:21 crc kubenswrapper[5104]: I0130 00:12:21.659175 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-fgflv" event={"ID":"6fa87bbe-4a1f-4c3e-a2cb-5c6f6d9440a6","Type":"ContainerStarted","Data":"20d52af5739f4533321beb94ea8ead8322063e928e17d9c41134c1bb61d7cd59"} Jan 30 00:12:21 crc kubenswrapper[5104]: I0130 00:12:21.660818 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-gvjb6" event={"ID":"8549d8ab-08fd-4d10-b03e-d162d745184a","Type":"ContainerStarted","Data":"68f2a2b0bbc7cd2916f2b995203292a6e5574cf27ef47038173319083ad77ae6"} Jan 30 00:12:21 crc kubenswrapper[5104]: I0130 00:12:21.661512 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-sdlg2" event={"ID":"2c01d6b4-a210-4e12-bb14-2694d7e41659","Type":"ContainerStarted","Data":"35db7c451551240b653672a36eb62ed1696bb2e83be95be185aef2947f9da888"} Jan 30 00:12:21 crc kubenswrapper[5104]: I0130 00:12:21.663508 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-vklm2" event={"ID":"994594a7-ccc0-4f06-84ca-89f4e3561a2f","Type":"ContainerStarted","Data":"d141ee831c71f6713f80dfc27d9263667d51649410646942f51b3bd90f15db2f"} Jan 30 00:12:21 crc kubenswrapper[5104]: I0130 00:12:21.665442 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-x97vp" event={"ID":"5cd08e97-a118-4f88-b699-2a0bb507b241","Type":"ContainerStarted","Data":"b5550074a09439a50e88e27897fd44b10d8e55bf853f70b17dddf3aac6910308"} Jan 30 00:12:21 crc kubenswrapper[5104]: I0130 00:12:21.666217 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-mb4lh" event={"ID":"b5f128e0-a6da-409d-9937-dc7f8b000da0","Type":"ContainerStarted","Data":"31d071cc6a2990d8b046a07f31b22bcd7079e2f4f5ec39eb510d96cdfa48ff6f"} Jan 30 00:12:21 crc kubenswrapper[5104]: I0130 00:12:21.696663 5104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-67c89758df-g4wlb" podStartSLOduration=98.696643095 podStartE2EDuration="1m38.696643095s" podCreationTimestamp="2026-01-30 00:10:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:21.69644386 +0000 UTC m=+122.428783089" watchObservedRunningTime="2026-01-30 00:12:21.696643095 +0000 UTC m=+122.428982314" Jan 30 00:12:21 crc kubenswrapper[5104]: I0130 00:12:21.726004 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:21 crc kubenswrapper[5104]: E0130 00:12:21.726488 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:22.22646433 +0000 UTC m=+122.958803569 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:21 crc kubenswrapper[5104]: I0130 00:12:21.769052 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-c5tsr" Jan 30 00:12:21 crc kubenswrapper[5104]: I0130 00:12:21.770682 5104 patch_prober.go:28] interesting pod/route-controller-manager-776cdc94d6-c5tsr container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Jan 30 00:12:21 crc kubenswrapper[5104]: I0130 00:12:21.770717 5104 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-c5tsr" podUID="ff629e62-b58e-4d85-aa96-fbc1845b304b" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" Jan 30 00:12:21 crc kubenswrapper[5104]: I0130 00:12:21.794990 5104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-jn5m9" podStartSLOduration=98.794975879 podStartE2EDuration="1m38.794975879s" podCreationTimestamp="2026-01-30 00:10:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:21.792880603 +0000 UTC 
m=+122.525219822" watchObservedRunningTime="2026-01-30 00:12:21.794975879 +0000 UTC m=+122.527315098" Jan 30 00:12:21 crc kubenswrapper[5104]: I0130 00:12:21.825614 5104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-747b44746d-kzx6r" podStartSLOduration=98.825600146 podStartE2EDuration="1m38.825600146s" podCreationTimestamp="2026-01-30 00:10:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:21.823666845 +0000 UTC m=+122.556006054" watchObservedRunningTime="2026-01-30 00:12:21.825600146 +0000 UTC m=+122.557939355" Jan 30 00:12:21 crc kubenswrapper[5104]: I0130 00:12:21.828436 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:21 crc kubenswrapper[5104]: E0130 00:12:21.829112 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:22.329099311 +0000 UTC m=+123.061438530 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-lhbqs" (UID: "40d2656d-a61b-4aaa-8860-225ca88ac6a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:21 crc kubenswrapper[5104]: I0130 00:12:21.846796 5104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-7pgw4" podStartSLOduration=98.846776088 podStartE2EDuration="1m38.846776088s" podCreationTimestamp="2026-01-30 00:10:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:21.836356637 +0000 UTC m=+122.568695856" watchObservedRunningTime="2026-01-30 00:12:21.846776088 +0000 UTC m=+122.579115317" Jan 30 00:12:21 crc kubenswrapper[5104]: I0130 00:12:21.854806 5104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-69b85846b6-r9m28" podStartSLOduration=98.854789285 podStartE2EDuration="1m38.854789285s" podCreationTimestamp="2026-01-30 00:10:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:21.853274203 +0000 UTC m=+122.585613422" watchObservedRunningTime="2026-01-30 00:12:21.854789285 +0000 UTC m=+122.587128504" Jan 30 00:12:21 crc kubenswrapper[5104]: I0130 00:12:21.935134 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod 
\"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:21 crc kubenswrapper[5104]: E0130 00:12:21.935345 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:22.435284207 +0000 UTC m=+123.167623436 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:21 crc kubenswrapper[5104]: I0130 00:12:21.935665 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:21 crc kubenswrapper[5104]: E0130 00:12:21.935966 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:22.435953806 +0000 UTC m=+123.168293025 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-lhbqs" (UID: "40d2656d-a61b-4aaa-8860-225ca88ac6a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:21 crc kubenswrapper[5104]: I0130 00:12:21.941887 5104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-5777786469-v56dx" podStartSLOduration=98.941876655 podStartE2EDuration="1m38.941876655s" podCreationTimestamp="2026-01-30 00:10:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:21.919021788 +0000 UTC m=+122.651361027" watchObservedRunningTime="2026-01-30 00:12:21.941876655 +0000 UTC m=+122.674215874" Jan 30 00:12:21 crc kubenswrapper[5104]: I0130 00:12:21.941994 5104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-64d44f6ddf-m6rzk" podStartSLOduration=98.941991088 podStartE2EDuration="1m38.941991088s" podCreationTimestamp="2026-01-30 00:10:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:21.940652482 +0000 UTC m=+122.672991701" watchObservedRunningTime="2026-01-30 00:12:21.941991088 +0000 UTC m=+122.674330307" Jan 30 00:12:21 crc kubenswrapper[5104]: I0130 00:12:21.982623 5104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-x97vp" podStartSLOduration=98.982606564 podStartE2EDuration="1m38.982606564s" podCreationTimestamp="2026-01-30 00:10:43 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:21.981890335 +0000 UTC m=+122.714229554" watchObservedRunningTime="2026-01-30 00:12:21.982606564 +0000 UTC m=+122.714945783" Jan 30 00:12:21 crc kubenswrapper[5104]: I0130 00:12:21.988639 5104 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-xs5zv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 00:12:21 crc kubenswrapper[5104]: [-]has-synced failed: reason withheld Jan 30 00:12:21 crc kubenswrapper[5104]: [+]process-running ok Jan 30 00:12:21 crc kubenswrapper[5104]: healthz check failed Jan 30 00:12:21 crc kubenswrapper[5104]: I0130 00:12:21.988955 5104 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-xs5zv" podUID="5b96d7cb-4106-4adb-baab-92ec201306e2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 00:12:22 crc kubenswrapper[5104]: I0130 00:12:22.018207 5104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-pzv69" podStartSLOduration=99.018179965 podStartE2EDuration="1m39.018179965s" podCreationTimestamp="2026-01-30 00:10:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:22.015994745 +0000 UTC m=+122.748333964" watchObservedRunningTime="2026-01-30 00:12:22.018179965 +0000 UTC m=+122.750519184" Jan 30 00:12:22 crc kubenswrapper[5104]: I0130 00:12:22.037111 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:22 crc kubenswrapper[5104]: E0130 00:12:22.037578 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:22.537562868 +0000 UTC m=+123.269902087 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:22 crc kubenswrapper[5104]: I0130 00:12:22.056960 5104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-6zt5n" podStartSLOduration=99.056941711 podStartE2EDuration="1m39.056941711s" podCreationTimestamp="2026-01-30 00:10:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:22.050478467 +0000 UTC m=+122.782817686" watchObservedRunningTime="2026-01-30 00:12:22.056941711 +0000 UTC m=+122.789280930" Jan 30 00:12:22 crc kubenswrapper[5104]: I0130 00:12:22.138654 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod 
\"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:22 crc kubenswrapper[5104]: E0130 00:12:22.139185 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:22.639134449 +0000 UTC m=+123.371473678 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-lhbqs" (UID: "40d2656d-a61b-4aaa-8860-225ca88ac6a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:22 crc kubenswrapper[5104]: I0130 00:12:22.240221 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:22 crc kubenswrapper[5104]: E0130 00:12:22.240661 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:22.740632769 +0000 UTC m=+123.472971988 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:22 crc kubenswrapper[5104]: I0130 00:12:22.335097 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-66458b6674-g766x" Jan 30 00:12:22 crc kubenswrapper[5104]: I0130 00:12:22.342072 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:22 crc kubenswrapper[5104]: E0130 00:12:22.342433 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:22.842420807 +0000 UTC m=+123.574760026 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-lhbqs" (UID: "40d2656d-a61b-4aaa-8860-225ca88ac6a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:22 crc kubenswrapper[5104]: I0130 00:12:22.443494 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:22 crc kubenswrapper[5104]: E0130 00:12:22.443933 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:22.943908276 +0000 UTC m=+123.676247495 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:22 crc kubenswrapper[5104]: I0130 00:12:22.545209 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:22 crc kubenswrapper[5104]: E0130 00:12:22.545669 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:23.045643622 +0000 UTC m=+123.777982841 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-lhbqs" (UID: "40d2656d-a61b-4aaa-8860-225ca88ac6a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:22 crc kubenswrapper[5104]: I0130 00:12:22.569101 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-l7gdh" Jan 30 00:12:22 crc kubenswrapper[5104]: I0130 00:12:22.646389 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:22 crc kubenswrapper[5104]: E0130 00:12:22.646550 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:23.146516375 +0000 UTC m=+123.878855604 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:22 crc kubenswrapper[5104]: I0130 00:12:22.646957 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:22 crc kubenswrapper[5104]: E0130 00:12:22.647418 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:23.14740249 +0000 UTC m=+123.879741699 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-lhbqs" (UID: "40d2656d-a61b-4aaa-8860-225ca88ac6a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:22 crc kubenswrapper[5104]: I0130 00:12:22.681306 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-9lr7t" event={"ID":"3f7789da-fc14-4144-8d2e-44a08ce5dd85","Type":"ContainerStarted","Data":"84b4a680fe3881a975d3235c0a81f4dce024f1d844b5cbd0a43db006cb252067"} Jan 30 00:12:22 crc kubenswrapper[5104]: I0130 00:12:22.689856 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-6rcx2" event={"ID":"aaa499cf-4449-4b32-9182-39c7d73cf064","Type":"ContainerStarted","Data":"c82a9f48860115b593cbe2f4922805ec5c0f5949493cb187858dbdb7d74e8559"} Jan 30 00:12:22 crc kubenswrapper[5104]: I0130 00:12:22.693930 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-d9wqk" event={"ID":"1bef0b46-9def-441e-88e8-f481e45026da","Type":"ContainerStarted","Data":"a5d2c4b1a0cd71395eebb661c4fd915d1413208ae24d36197f3e9a69dd047841"} Jan 30 00:12:22 crc kubenswrapper[5104]: I0130 00:12:22.702303 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-dmtb5" event={"ID":"ac96d3d5-fde2-4526-9d1d-ed33ebf8a909","Type":"ContainerStarted","Data":"bc104ddc0719ed9518e33bbaa709b5867da099ee2ea0601164a3064592521317"} Jan 30 00:12:22 crc kubenswrapper[5104]: I0130 00:12:22.704951 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-pdjtd" event={"ID":"51278e19-fef3-4056-bdb6-f9f60f3a65e0","Type":"ContainerStarted","Data":"426cfd9a6e9527f3e58c024d7c9bea870f3e9ff0752c947abe00ccdbba8ed91e"} Jan 30 00:12:22 crc kubenswrapper[5104]: I0130 00:12:22.748162 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:22 crc kubenswrapper[5104]: E0130 00:12:22.748533 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:23.248507689 +0000 UTC m=+123.980846908 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:22 crc kubenswrapper[5104]: I0130 00:12:22.850214 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:22 crc kubenswrapper[5104]: E0130 00:12:22.850518 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:23.350504952 +0000 UTC m=+124.082844171 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-lhbqs" (UID: "40d2656d-a61b-4aaa-8860-225ca88ac6a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:22 crc kubenswrapper[5104]: I0130 00:12:22.890103 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-c5tsr" Jan 30 00:12:22 crc kubenswrapper[5104]: I0130 00:12:22.905511 5104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-9ddfb9f55-d9wqk" podStartSLOduration=99.905488066 podStartE2EDuration="1m39.905488066s" podCreationTimestamp="2026-01-30 00:10:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:22.903111222 +0000 UTC m=+123.635450461" watchObservedRunningTime="2026-01-30 00:12:22.905488066 +0000 UTC m=+123.637827285" Jan 30 00:12:22 crc kubenswrapper[5104]: I0130 00:12:22.951021 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:22 crc kubenswrapper[5104]: E0130 00:12:22.951267 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-01-30 00:12:23.45121892 +0000 UTC m=+124.183558139 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:22 crc kubenswrapper[5104]: I0130 00:12:22.951869 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:22 crc kubenswrapper[5104]: E0130 00:12:22.952777 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:23.452764792 +0000 UTC m=+124.185104011 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-lhbqs" (UID: "40d2656d-a61b-4aaa-8860-225ca88ac6a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:22 crc kubenswrapper[5104]: I0130 00:12:22.952787 5104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-gvjb6" podStartSLOduration=99.952769342 podStartE2EDuration="1m39.952769342s" podCreationTimestamp="2026-01-30 00:10:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:22.949772032 +0000 UTC m=+123.682111261" watchObservedRunningTime="2026-01-30 00:12:22.952769342 +0000 UTC m=+123.685108561" Jan 30 00:12:22 crc kubenswrapper[5104]: I0130 00:12:22.987132 5104 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-xs5zv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 00:12:22 crc kubenswrapper[5104]: [-]has-synced failed: reason withheld Jan 30 00:12:22 crc kubenswrapper[5104]: [+]process-running ok Jan 30 00:12:22 crc kubenswrapper[5104]: healthz check failed Jan 30 00:12:22 crc kubenswrapper[5104]: I0130 00:12:22.987202 5104 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-xs5zv" podUID="5b96d7cb-4106-4adb-baab-92ec201306e2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 00:12:23 crc kubenswrapper[5104]: I0130 00:12:23.004983 5104 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-vklm2" podStartSLOduration=100.004967151 podStartE2EDuration="1m40.004967151s" podCreationTimestamp="2026-01-30 00:10:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:23.003931213 +0000 UTC m=+123.736270452" watchObservedRunningTime="2026-01-30 00:12:23.004967151 +0000 UTC m=+123.737306370" Jan 30 00:12:23 crc kubenswrapper[5104]: I0130 00:12:23.053603 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:23 crc kubenswrapper[5104]: E0130 00:12:23.053703 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:23.553687317 +0000 UTC m=+124.286026536 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:23 crc kubenswrapper[5104]: I0130 00:12:23.055750 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:23 crc kubenswrapper[5104]: E0130 00:12:23.058003 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:23.557990522 +0000 UTC m=+124.290329731 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-lhbqs" (UID: "40d2656d-a61b-4aaa-8860-225ca88ac6a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:23 crc kubenswrapper[5104]: I0130 00:12:23.058491 5104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-896z6" podStartSLOduration=100.058474925 podStartE2EDuration="1m40.058474925s" podCreationTimestamp="2026-01-30 00:10:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:23.03568869 +0000 UTC m=+123.768027909" watchObservedRunningTime="2026-01-30 00:12:23.058474925 +0000 UTC m=+123.790814144" Jan 30 00:12:23 crc kubenswrapper[5104]: I0130 00:12:23.058913 5104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-pdjtd" podStartSLOduration=100.058909728 podStartE2EDuration="1m40.058909728s" podCreationTimestamp="2026-01-30 00:10:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:23.056548344 +0000 UTC m=+123.788887563" watchObservedRunningTime="2026-01-30 00:12:23.058909728 +0000 UTC m=+123.791248947" Jan 30 00:12:23 crc kubenswrapper[5104]: I0130 00:12:23.159436 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod 
\"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:23 crc kubenswrapper[5104]: E0130 00:12:23.167071 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:23.667034946 +0000 UTC m=+124.399374175 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:23 crc kubenswrapper[5104]: I0130 00:12:23.167311 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:23 crc kubenswrapper[5104]: E0130 00:12:23.167740 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:23.667729744 +0000 UTC m=+124.400068963 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-lhbqs" (UID: "40d2656d-a61b-4aaa-8860-225ca88ac6a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:23 crc kubenswrapper[5104]: I0130 00:12:23.204392 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console-operator/console-operator-67c89758df-g4wlb" Jan 30 00:12:23 crc kubenswrapper[5104]: I0130 00:12:23.204684 5104 patch_prober.go:28] interesting pod/console-operator-67c89758df-g4wlb container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.26:8443/readyz\": dial tcp 10.217.0.26:8443: connect: connection refused" start-of-body= Jan 30 00:12:23 crc kubenswrapper[5104]: I0130 00:12:23.204737 5104 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-67c89758df-g4wlb" podUID="a92e1ebb-86ac-4456-873b-ce575e9cda12" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.26:8443/readyz\": dial tcp 10.217.0.26:8443: connect: connection refused" Jan 30 00:12:23 crc kubenswrapper[5104]: I0130 00:12:23.237589 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/downloads-747b44746d-kzx6r" Jan 30 00:12:23 crc kubenswrapper[5104]: I0130 00:12:23.240386 5104 patch_prober.go:28] interesting pod/downloads-747b44746d-kzx6r container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: connect: connection refused" start-of-body= Jan 30 00:12:23 crc kubenswrapper[5104]: I0130 00:12:23.240443 
5104 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-kzx6r" podUID="6fd43d75-51fe-42d6-9f2a-adbe6045f25c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: connect: connection refused" Jan 30 00:12:23 crc kubenswrapper[5104]: I0130 00:12:23.268314 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:23 crc kubenswrapper[5104]: E0130 00:12:23.268691 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:23.768663409 +0000 UTC m=+124.501002648 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:23 crc kubenswrapper[5104]: I0130 00:12:23.269494 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:23 crc kubenswrapper[5104]: E0130 00:12:23.272204 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:23.772194135 +0000 UTC m=+124.504533354 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-lhbqs" (UID: "40d2656d-a61b-4aaa-8860-225ca88ac6a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:23 crc kubenswrapper[5104]: I0130 00:12:23.370483 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:23 crc kubenswrapper[5104]: E0130 00:12:23.370704 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:23.870671583 +0000 UTC m=+124.603010802 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:23 crc kubenswrapper[5104]: I0130 00:12:23.370835 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:23 crc kubenswrapper[5104]: E0130 00:12:23.371198 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:23.871188657 +0000 UTC m=+124.603527876 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-lhbqs" (UID: "40d2656d-a61b-4aaa-8860-225ca88ac6a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:23 crc kubenswrapper[5104]: I0130 00:12:23.473253 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:23 crc kubenswrapper[5104]: E0130 00:12:23.473543 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:23.973526149 +0000 UTC m=+124.705865368 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:23 crc kubenswrapper[5104]: I0130 00:12:23.506143 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-config-operator/openshift-config-operator-5777786469-v56dx" Jan 30 00:12:23 crc kubenswrapper[5104]: I0130 00:12:23.507800 5104 patch_prober.go:28] interesting pod/openshift-config-operator-5777786469-v56dx container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Jan 30 00:12:23 crc kubenswrapper[5104]: I0130 00:12:23.507876 5104 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-5777786469-v56dx" podUID="512ba09a-c537-4c10-86c4-6226498ce0e0" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Jan 30 00:12:23 crc kubenswrapper[5104]: I0130 00:12:23.574570 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:23 crc kubenswrapper[5104]: E0130 00:12:23.574911 5104 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:24.074898216 +0000 UTC m=+124.807237435 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-lhbqs" (UID: "40d2656d-a61b-4aaa-8860-225ca88ac6a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:23 crc kubenswrapper[5104]: I0130 00:12:23.675550 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:23 crc kubenswrapper[5104]: E0130 00:12:23.675929 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:24.175914412 +0000 UTC m=+124.908253621 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:23 crc kubenswrapper[5104]: I0130 00:12:23.716779 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-t2sbz" event={"ID":"73af8639-1dae-4861-8165-94c6c5410e1b","Type":"ContainerStarted","Data":"81cbb1a41ee64434272fce7fd77ed1f572fe02a322a4e462d0493e556e9126fe"} Jan 30 00:12:23 crc kubenswrapper[5104]: I0130 00:12:23.720269 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-grfh9" event={"ID":"cd4db4af-c1ef-4771-88bf-d372af1849fa","Type":"ContainerStarted","Data":"2feff20d7142d1c279baf50a0a679ae0f44d3d2544464a5f6b6a07a707bd4147"} Jan 30 00:12:23 crc kubenswrapper[5104]: I0130 00:12:23.724718 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-tgcbf" event={"ID":"10904391-ad3c-46eb-8147-c32c0612487c","Type":"ContainerStarted","Data":"bb0d6f203014c3847cb0a9f7f82b44a715370f1e5b9b781e5a7bf72c56170b2c"} Jan 30 00:12:23 crc kubenswrapper[5104]: I0130 00:12:23.731239 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-bn2ph" event={"ID":"9eebb380-6a1e-49b2-bd63-222bc499058b","Type":"ContainerStarted","Data":"7d68e996747c3dde1b5ba23b0753a0e14fec83ede7c26403a5aa586c4800d1af"} Jan 30 00:12:23 crc kubenswrapper[5104]: I0130 00:12:23.733546 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-2f4tq" 
event={"ID":"3b2a92e1-d95a-4a3e-a07e-62e5100931bb","Type":"ContainerStarted","Data":"6d7c87fb3a68f735f4c537d11449d981309ba9768c4e831b86b0eb98080f3b42"} Jan 30 00:12:23 crc kubenswrapper[5104]: I0130 00:12:23.735637 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-wmdwr" event={"ID":"c1186351-b63f-4a39-b8e6-e01f0b686544","Type":"ContainerStarted","Data":"7c790473ba174bb156f3a40957b5e69d8bdbfd75ba0320c71f8a75c940cba47a"} Jan 30 00:12:23 crc kubenswrapper[5104]: I0130 00:12:23.740705 5104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-qtds2" podStartSLOduration=100.74069364100001 podStartE2EDuration="1m40.740693641s" podCreationTimestamp="2026-01-30 00:10:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:23.259536033 +0000 UTC m=+123.991875252" watchObservedRunningTime="2026-01-30 00:12:23.740693641 +0000 UTC m=+124.473032860" Jan 30 00:12:23 crc kubenswrapper[5104]: I0130 00:12:23.741530 5104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-t2sbz" podStartSLOduration=8.741524983 podStartE2EDuration="8.741524983s" podCreationTimestamp="2026-01-30 00:12:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:23.740447084 +0000 UTC m=+124.472786313" watchObservedRunningTime="2026-01-30 00:12:23.741524983 +0000 UTC m=+124.473864202" Jan 30 00:12:23 crc kubenswrapper[5104]: I0130 00:12:23.777365 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:23 crc kubenswrapper[5104]: E0130 00:12:23.777706 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:24.277690739 +0000 UTC m=+125.010029958 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-lhbqs" (UID: "40d2656d-a61b-4aaa-8860-225ca88ac6a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:23 crc kubenswrapper[5104]: I0130 00:12:23.878317 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:23 crc kubenswrapper[5104]: E0130 00:12:23.878518 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:24.37849175 +0000 UTC m=+125.110830969 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:23 crc kubenswrapper[5104]: I0130 00:12:23.879062 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:23 crc kubenswrapper[5104]: E0130 00:12:23.879406 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:24.379399004 +0000 UTC m=+125.111738223 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-lhbqs" (UID: "40d2656d-a61b-4aaa-8860-225ca88ac6a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:23 crc kubenswrapper[5104]: I0130 00:12:23.896751 5104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-9lr7t" podStartSLOduration=100.896731883 podStartE2EDuration="1m40.896731883s" podCreationTimestamp="2026-01-30 00:10:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:23.880590667 +0000 UTC m=+124.612929886" watchObservedRunningTime="2026-01-30 00:12:23.896731883 +0000 UTC m=+124.629071102" Jan 30 00:12:23 crc kubenswrapper[5104]: I0130 00:12:23.980588 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:23 crc kubenswrapper[5104]: E0130 00:12:23.981427 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:24.481395868 +0000 UTC m=+125.213735087 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:23 crc kubenswrapper[5104]: I0130 00:12:23.987793 5104 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-xs5zv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 00:12:23 crc kubenswrapper[5104]: [-]has-synced failed: reason withheld Jan 30 00:12:23 crc kubenswrapper[5104]: [+]process-running ok Jan 30 00:12:23 crc kubenswrapper[5104]: healthz check failed Jan 30 00:12:23 crc kubenswrapper[5104]: I0130 00:12:23.987878 5104 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-xs5zv" podUID="5b96d7cb-4106-4adb-baab-92ec201306e2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 00:12:24 crc kubenswrapper[5104]: I0130 00:12:24.082064 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:24 crc kubenswrapper[5104]: E0130 00:12:24.082549 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2026-01-30 00:12:24.582523438 +0000 UTC m=+125.314862687 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-lhbqs" (UID: "40d2656d-a61b-4aaa-8860-225ca88ac6a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:24 crc kubenswrapper[5104]: I0130 00:12:24.157800 5104 patch_prober.go:28] interesting pod/console-operator-67c89758df-g4wlb container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.26:8443/readyz\": dial tcp 10.217.0.26:8443: connect: connection refused" start-of-body= Jan 30 00:12:24 crc kubenswrapper[5104]: I0130 00:12:24.157836 5104 patch_prober.go:28] interesting pod/openshift-config-operator-5777786469-v56dx container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Jan 30 00:12:24 crc kubenswrapper[5104]: I0130 00:12:24.157884 5104 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-67c89758df-g4wlb" podUID="a92e1ebb-86ac-4456-873b-ce575e9cda12" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.26:8443/readyz\": dial tcp 10.217.0.26:8443: connect: connection refused" Jan 30 00:12:24 crc kubenswrapper[5104]: I0130 00:12:24.157932 5104 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-5777786469-v56dx" podUID="512ba09a-c537-4c10-86c4-6226498ce0e0" containerName="openshift-config-operator" probeResult="failure" 
output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Jan 30 00:12:24 crc kubenswrapper[5104]: I0130 00:12:24.158203 5104 patch_prober.go:28] interesting pod/olm-operator-5cdf44d969-pdjtd container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.18:8443/healthz\": dial tcp 10.217.0.18:8443: connect: connection refused" start-of-body= Jan 30 00:12:24 crc kubenswrapper[5104]: I0130 00:12:24.158282 5104 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-pdjtd" podUID="51278e19-fef3-4056-bdb6-f9f60f3a65e0" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/healthz\": dial tcp 10.217.0.18:8443: connect: connection refused" Jan 30 00:12:24 crc kubenswrapper[5104]: I0130 00:12:24.158341 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-pdjtd" Jan 30 00:12:24 crc kubenswrapper[5104]: I0130 00:12:24.173263 5104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-bn2ph" podStartSLOduration=100.173252696 podStartE2EDuration="1m40.173252696s" podCreationTimestamp="2026-01-30 00:10:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:24.172412394 +0000 UTC m=+124.904751603" watchObservedRunningTime="2026-01-30 00:12:24.173252696 +0000 UTC m=+124.905591915" Jan 30 00:12:24 crc kubenswrapper[5104]: I0130 00:12:24.174423 5104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-grfh9" podStartSLOduration=100.174417268 podStartE2EDuration="1m40.174417268s" podCreationTimestamp="2026-01-30 00:10:44 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:23.896582009 +0000 UTC m=+124.628921258" watchObservedRunningTime="2026-01-30 00:12:24.174417268 +0000 UTC m=+124.906756487" Jan 30 00:12:24 crc kubenswrapper[5104]: I0130 00:12:24.182817 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:24 crc kubenswrapper[5104]: E0130 00:12:24.183204 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:24.683191635 +0000 UTC m=+125.415530854 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:24 crc kubenswrapper[5104]: I0130 00:12:24.215803 5104 patch_prober.go:28] interesting pod/downloads-747b44746d-kzx6r container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: connect: connection refused" start-of-body= Jan 30 00:12:24 crc kubenswrapper[5104]: I0130 00:12:24.215892 5104 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-kzx6r" podUID="6fd43d75-51fe-42d6-9f2a-adbe6045f25c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: connect: connection refused" Jan 30 00:12:24 crc kubenswrapper[5104]: I0130 00:12:24.219582 5104 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Jan 30 00:12:24 crc kubenswrapper[5104]: I0130 00:12:24.259797 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Jan 30 00:12:24 crc kubenswrapper[5104]: I0130 00:12:24.259993 5104 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 30 00:12:24 crc kubenswrapper[5104]: I0130 00:12:24.262820 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler\"/\"installer-sa-dockercfg-qpkss\"" Jan 30 00:12:24 crc kubenswrapper[5104]: I0130 00:12:24.276236 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler\"/\"kube-root-ca.crt\"" Jan 30 00:12:24 crc kubenswrapper[5104]: I0130 00:12:24.289903 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:24 crc kubenswrapper[5104]: E0130 00:12:24.291675 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:24.791659333 +0000 UTC m=+125.523998552 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-lhbqs" (UID: "40d2656d-a61b-4aaa-8860-225ca88ac6a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:24 crc kubenswrapper[5104]: I0130 00:12:24.391461 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:24 crc kubenswrapper[5104]: I0130 00:12:24.391652 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aac746ff-39ab-49d5-9540-fc59eadfed37-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"aac746ff-39ab-49d5-9540-fc59eadfed37\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 30 00:12:24 crc kubenswrapper[5104]: I0130 00:12:24.391723 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/aac746ff-39ab-49d5-9540-fc59eadfed37-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"aac746ff-39ab-49d5-9540-fc59eadfed37\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 30 00:12:24 crc kubenswrapper[5104]: E0130 00:12:24.391879 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-01-30 00:12:24.891840667 +0000 UTC m=+125.624179876 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:24 crc kubenswrapper[5104]: I0130 00:12:24.493421 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/aac746ff-39ab-49d5-9540-fc59eadfed37-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"aac746ff-39ab-49d5-9540-fc59eadfed37\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 30 00:12:24 crc kubenswrapper[5104]: I0130 00:12:24.493511 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:24 crc kubenswrapper[5104]: I0130 00:12:24.493587 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aac746ff-39ab-49d5-9540-fc59eadfed37-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"aac746ff-39ab-49d5-9540-fc59eadfed37\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 30 00:12:24 crc kubenswrapper[5104]: I0130 00:12:24.493604 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/aac746ff-39ab-49d5-9540-fc59eadfed37-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"aac746ff-39ab-49d5-9540-fc59eadfed37\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 30 00:12:24 crc kubenswrapper[5104]: E0130 00:12:24.493967 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:24.993944973 +0000 UTC m=+125.726284192 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-lhbqs" (UID: "40d2656d-a61b-4aaa-8860-225ca88ac6a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:24 crc kubenswrapper[5104]: I0130 00:12:24.513275 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aac746ff-39ab-49d5-9540-fc59eadfed37-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"aac746ff-39ab-49d5-9540-fc59eadfed37\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 30 00:12:24 crc kubenswrapper[5104]: I0130 00:12:24.595077 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:24 crc kubenswrapper[5104]: E0130 00:12:24.595277 5104 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:25.095244908 +0000 UTC m=+125.827584137 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:24 crc kubenswrapper[5104]: I0130 00:12:24.595577 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:24 crc kubenswrapper[5104]: E0130 00:12:24.595954 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:25.095938106 +0000 UTC m=+125.828277325 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-lhbqs" (UID: "40d2656d-a61b-4aaa-8860-225ca88ac6a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:24 crc kubenswrapper[5104]: I0130 00:12:24.612132 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 30 00:12:24 crc kubenswrapper[5104]: I0130 00:12:24.696408 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:24 crc kubenswrapper[5104]: E0130 00:12:24.696616 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:25.196597363 +0000 UTC m=+125.928936582 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:24 crc kubenswrapper[5104]: I0130 00:12:24.697003 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:24 crc kubenswrapper[5104]: E0130 00:12:24.697306 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:25.197297362 +0000 UTC m=+125.929636581 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-lhbqs" (UID: "40d2656d-a61b-4aaa-8860-225ca88ac6a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:24 crc kubenswrapper[5104]: I0130 00:12:24.741523 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-mb4lh" event={"ID":"b5f128e0-a6da-409d-9937-dc7f8b000da0","Type":"ContainerStarted","Data":"e30d3aeeceacab27a9a72c6a0d28ae371c5d759542a61ab5492afddc30ea0ae0"} Jan 30 00:12:24 crc kubenswrapper[5104]: I0130 00:12:24.743256 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-675xg" event={"ID":"06fbf10a-e423-4033-b4cb-ff77c12973d7","Type":"ContainerStarted","Data":"6bedce276bf9c855a7c9042660338d0cb79147e0ca8f9251941e2a419355f4fd"} Jan 30 00:12:24 crc kubenswrapper[5104]: I0130 00:12:24.744259 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-jcrzz" event={"ID":"d559e43d-60f9-4f29-8d4e-c595cad2bd22","Type":"ContainerStarted","Data":"73fda043ea6455b150d579fdb26e47075828e08d944332ac92af9a67274a1c4a"} Jan 30 00:12:24 crc kubenswrapper[5104]: I0130 00:12:24.745237 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-vhqhg" event={"ID":"69495734-d360-40f9-bf1c-98a808e7f987","Type":"ContainerStarted","Data":"3f139c35e711e471550b04fc12f01dfd41b2d13216762ae1034ccca3fc804792"} Jan 30 00:12:24 crc kubenswrapper[5104]: I0130 00:12:24.746909 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-gkzts" 
event={"ID":"254976ad-3d5f-484e-b3ec-4dbc14567032","Type":"ContainerStarted","Data":"4dc6e4ef4e7bfc309d76c018f1da4e8a1a8d52cd89e61d707f315742e0cbfdc1"} Jan 30 00:12:24 crc kubenswrapper[5104]: I0130 00:12:24.791634 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Jan 30 00:12:24 crc kubenswrapper[5104]: I0130 00:12:24.799002 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:24 crc kubenswrapper[5104]: E0130 00:12:24.799492 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:25.299431779 +0000 UTC m=+126.031771018 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:24 crc kubenswrapper[5104]: I0130 00:12:24.863927 5104 patch_prober.go:28] interesting pod/openshift-config-operator-5777786469-v56dx container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Jan 30 00:12:24 crc kubenswrapper[5104]: I0130 00:12:24.864015 5104 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-5777786469-v56dx" podUID="512ba09a-c537-4c10-86c4-6226498ce0e0" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Jan 30 00:12:24 crc kubenswrapper[5104]: I0130 00:12:24.901977 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:24 crc kubenswrapper[5104]: E0130 00:12:24.902417 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2026-01-30 00:12:25.402396309 +0000 UTC m=+126.134735548 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-lhbqs" (UID: "40d2656d-a61b-4aaa-8860-225ca88ac6a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:24 crc kubenswrapper[5104]: I0130 00:12:24.925318 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-mb4lh" Jan 30 00:12:24 crc kubenswrapper[5104]: I0130 00:12:24.928936 5104 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-mb4lh container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/healthz\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Jan 30 00:12:24 crc kubenswrapper[5104]: I0130 00:12:24.929179 5104 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-mb4lh" podUID="b5f128e0-a6da-409d-9937-dc7f8b000da0" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.25:8080/healthz\": dial tcp 10.217.0.25:8080: connect: connection refused" Jan 30 00:12:24 crc kubenswrapper[5104]: I0130 00:12:24.986555 5104 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-xs5zv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 00:12:24 crc kubenswrapper[5104]: [-]has-synced failed: reason withheld Jan 30 00:12:24 crc kubenswrapper[5104]: [+]process-running ok Jan 30 00:12:24 
crc kubenswrapper[5104]: healthz check failed Jan 30 00:12:24 crc kubenswrapper[5104]: I0130 00:12:24.986712 5104 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-xs5zv" podUID="5b96d7cb-4106-4adb-baab-92ec201306e2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 00:12:25 crc kubenswrapper[5104]: I0130 00:12:25.002602 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:25 crc kubenswrapper[5104]: E0130 00:12:25.003629 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:25.503605821 +0000 UTC m=+126.235945030 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:25 crc kubenswrapper[5104]: I0130 00:12:25.104789 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:25 crc kubenswrapper[5104]: E0130 00:12:25.105149 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:25.605133121 +0000 UTC m=+126.337472340 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-lhbqs" (UID: "40d2656d-a61b-4aaa-8860-225ca88ac6a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:25 crc kubenswrapper[5104]: I0130 00:12:25.206862 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:25 crc kubenswrapper[5104]: E0130 00:12:25.207082 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:25.707053462 +0000 UTC m=+126.439392681 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:25 crc kubenswrapper[5104]: I0130 00:12:25.207592 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:25 crc kubenswrapper[5104]: E0130 00:12:25.208080 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:25.708061599 +0000 UTC m=+126.440400818 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-lhbqs" (UID: "40d2656d-a61b-4aaa-8860-225ca88ac6a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:25 crc kubenswrapper[5104]: I0130 00:12:25.308996 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:25 crc kubenswrapper[5104]: E0130 00:12:25.309201 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:25.809166008 +0000 UTC m=+126.541505227 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:25 crc kubenswrapper[5104]: I0130 00:12:25.309576 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:25 crc kubenswrapper[5104]: E0130 00:12:25.309928 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:25.809911678 +0000 UTC m=+126.542250897 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-lhbqs" (UID: "40d2656d-a61b-4aaa-8860-225ca88ac6a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:25 crc kubenswrapper[5104]: I0130 00:12:25.411602 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:25 crc kubenswrapper[5104]: E0130 00:12:25.411840 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:25.911800258 +0000 UTC m=+126.644139477 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:25 crc kubenswrapper[5104]: I0130 00:12:25.412005 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:25 crc kubenswrapper[5104]: E0130 00:12:25.412390 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:25.912373174 +0000 UTC m=+126.644712393 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-lhbqs" (UID: "40d2656d-a61b-4aaa-8860-225ca88ac6a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:25 crc kubenswrapper[5104]: I0130 00:12:25.513174 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:25 crc kubenswrapper[5104]: E0130 00:12:25.513430 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:26.013392331 +0000 UTC m=+126.745731570 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:25 crc kubenswrapper[5104]: I0130 00:12:25.513628 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:25 crc kubenswrapper[5104]: E0130 00:12:25.514154 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:26.014137901 +0000 UTC m=+126.746477140 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-lhbqs" (UID: "40d2656d-a61b-4aaa-8860-225ca88ac6a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:25 crc kubenswrapper[5104]: I0130 00:12:25.553191 5104 patch_prober.go:28] interesting pod/olm-operator-5cdf44d969-pdjtd container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.18:8443/healthz\": dial tcp 10.217.0.18:8443: connect: connection refused" start-of-body= Jan 30 00:12:25 crc kubenswrapper[5104]: I0130 00:12:25.553254 5104 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-pdjtd" podUID="51278e19-fef3-4056-bdb6-f9f60f3a65e0" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/healthz\": dial tcp 10.217.0.18:8443: connect: connection refused" Jan 30 00:12:25 crc kubenswrapper[5104]: I0130 00:12:25.575519 5104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-mb4lh" podStartSLOduration=101.575490337 podStartE2EDuration="1m41.575490337s" podCreationTimestamp="2026-01-30 00:10:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:24.986240272 +0000 UTC m=+125.718579501" watchObservedRunningTime="2026-01-30 00:12:25.575490337 +0000 UTC m=+126.307829576" Jan 30 00:12:25 crc kubenswrapper[5104]: I0130 00:12:25.575996 5104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-vhqhg" 
podStartSLOduration=10.57598617 podStartE2EDuration="10.57598617s" podCreationTimestamp="2026-01-30 00:12:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:25.571397766 +0000 UTC m=+126.303737005" watchObservedRunningTime="2026-01-30 00:12:25.57598617 +0000 UTC m=+126.308325409" Jan 30 00:12:25 crc kubenswrapper[5104]: I0130 00:12:25.577955 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-grfh9" Jan 30 00:12:25 crc kubenswrapper[5104]: I0130 00:12:25.579196 5104 patch_prober.go:28] interesting pod/packageserver-7d4fc7d867-grfh9 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:5443/healthz\": dial tcp 10.217.0.42:5443: connect: connection refused" start-of-body= Jan 30 00:12:25 crc kubenswrapper[5104]: I0130 00:12:25.579261 5104 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-grfh9" podUID="cd4db4af-c1ef-4771-88bf-d372af1849fa" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.42:5443/healthz\": dial tcp 10.217.0.42:5443: connect: connection refused" Jan 30 00:12:25 crc kubenswrapper[5104]: I0130 00:12:25.614426 5104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-8596bd845d-675xg" podStartSLOduration=101.614402478 podStartE2EDuration="1m41.614402478s" podCreationTimestamp="2026-01-30 00:10:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:25.610944434 +0000 UTC m=+126.343283663" watchObservedRunningTime="2026-01-30 00:12:25.614402478 +0000 UTC m=+126.346741717" Jan 30 00:12:25 crc kubenswrapper[5104]: I0130 
00:12:25.616492 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:25 crc kubenswrapper[5104]: E0130 00:12:25.618413 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:26.116644928 +0000 UTC m=+126.848984147 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:25 crc kubenswrapper[5104]: I0130 00:12:25.619620 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:25 crc kubenswrapper[5104]: E0130 00:12:25.621197 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2026-01-30 00:12:26.12118606 +0000 UTC m=+126.853525279 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-lhbqs" (UID: "40d2656d-a61b-4aaa-8860-225ca88ac6a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:25 crc kubenswrapper[5104]: I0130 00:12:25.630497 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-multus/cni-sysctl-allowlist-ds-2f4tq" Jan 30 00:12:25 crc kubenswrapper[5104]: I0130 00:12:25.639358 5104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-69db94689b-gkzts" podStartSLOduration=102.638818007 podStartE2EDuration="1m42.638818007s" podCreationTimestamp="2026-01-30 00:10:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:25.632584338 +0000 UTC m=+126.364923547" watchObservedRunningTime="2026-01-30 00:12:25.638818007 +0000 UTC m=+126.371157226" Jan 30 00:12:25 crc kubenswrapper[5104]: I0130 00:12:25.656730 5104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-2f4tq" podStartSLOduration=10.656708429 podStartE2EDuration="10.656708429s" podCreationTimestamp="2026-01-30 00:12:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:25.652190207 +0000 UTC m=+126.384529426" watchObservedRunningTime="2026-01-30 00:12:25.656708429 +0000 UTC m=+126.389047648" Jan 30 00:12:25 crc kubenswrapper[5104]: I0130 00:12:25.677561 
5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-2f4tq" Jan 30 00:12:25 crc kubenswrapper[5104]: I0130 00:12:25.700635 5104 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-9ddfb9f55-d9wqk" Jan 30 00:12:25 crc kubenswrapper[5104]: I0130 00:12:25.700697 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-apiserver/apiserver-9ddfb9f55-d9wqk" Jan 30 00:12:25 crc kubenswrapper[5104]: I0130 00:12:25.703808 5104 patch_prober.go:28] interesting pod/apiserver-9ddfb9f55-d9wqk container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.31:8443/livez\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body= Jan 30 00:12:25 crc kubenswrapper[5104]: I0130 00:12:25.703884 5104 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-9ddfb9f55-d9wqk" podUID="1bef0b46-9def-441e-88e8-f481e45026da" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.31:8443/livez\": dial tcp 10.217.0.31:8443: connect: connection refused" Jan 30 00:12:25 crc kubenswrapper[5104]: I0130 00:12:25.722158 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:25 crc kubenswrapper[5104]: E0130 00:12:25.722359 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-01-30 00:12:26.22233602 +0000 UTC m=+126.954675239 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:25 crc kubenswrapper[5104]: I0130 00:12:25.722552 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:25 crc kubenswrapper[5104]: E0130 00:12:25.723028 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:26.223008349 +0000 UTC m=+126.955347568 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-lhbqs" (UID: "40d2656d-a61b-4aaa-8860-225ca88ac6a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:25 crc kubenswrapper[5104]: I0130 00:12:25.776379 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-72hww" event={"ID":"70d57bd5-86d2-4a87-baac-b6c03e6b5cb2","Type":"ContainerStarted","Data":"c1214817daf5d5e5bb85516b4bf3357a4eb799529902ca01ce026db34e28228d"} Jan 30 00:12:25 crc kubenswrapper[5104]: I0130 00:12:25.779071 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"aac746ff-39ab-49d5-9540-fc59eadfed37","Type":"ContainerStarted","Data":"02e917e70cb0aa9e758cd2c7d7ad340dddc1adf287c01aa8afe73268df00d2b3"} Jan 30 00:12:25 crc kubenswrapper[5104]: I0130 00:12:25.783922 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-fgflv" event={"ID":"6fa87bbe-4a1f-4c3e-a2cb-5c6f6d9440a6","Type":"ContainerStarted","Data":"51e0c3d38bc9750b8339782c3a67760a7b3aa0d5dbf83040bb50611d61a76a74"} Jan 30 00:12:25 crc kubenswrapper[5104]: I0130 00:12:25.789073 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-sdlg2" event={"ID":"2c01d6b4-a210-4e12-bb14-2694d7e41659","Type":"ContainerStarted","Data":"352ebcc14d3a2a778f6ddea87315fcd815c486cba35ea2fb97fce2cc24a1a503"} Jan 30 00:12:25 crc kubenswrapper[5104]: I0130 00:12:25.801224 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-dmtb5" 
event={"ID":"ac96d3d5-fde2-4526-9d1d-ed33ebf8a909","Type":"ContainerStarted","Data":"b7dac110f3a0560a97b606a727d937413d9a14d99bc56ed1ad584662a22ba134"} Jan 30 00:12:25 crc kubenswrapper[5104]: I0130 00:12:25.809351 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495520-7xhcr" event={"ID":"43bd3b33-35f9-480e-9425-26cc2318094f","Type":"ContainerStarted","Data":"dfd815d92628999d0dca12041b2162a70f293994d94fec3e7286044f9b66a9c1"} Jan 30 00:12:25 crc kubenswrapper[5104]: I0130 00:12:25.824317 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:25 crc kubenswrapper[5104]: E0130 00:12:25.827719 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:26.327693985 +0000 UTC m=+127.060033204 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:25 crc kubenswrapper[5104]: I0130 00:12:25.828980 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-6rcx2" event={"ID":"aaa499cf-4449-4b32-9182-39c7d73cf064","Type":"ContainerStarted","Data":"e2ccf54510921a05be9cd0226a51c7067e0829ff6303a30479332b7098e48d9e"} Jan 30 00:12:25 crc kubenswrapper[5104]: I0130 00:12:25.829561 5104 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-mb4lh container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/healthz\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Jan 30 00:12:25 crc kubenswrapper[5104]: I0130 00:12:25.829596 5104 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-mb4lh" podUID="b5f128e0-a6da-409d-9937-dc7f8b000da0" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.25:8080/healthz\": dial tcp 10.217.0.25:8080: connect: connection refused" Jan 30 00:12:25 crc kubenswrapper[5104]: I0130 00:12:25.830530 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-6rcx2" Jan 30 00:12:25 crc kubenswrapper[5104]: I0130 00:12:25.830775 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-jcrzz" 
Jan 30 00:12:25 crc kubenswrapper[5104]: I0130 00:12:25.831823 5104 patch_prober.go:28] interesting pod/packageserver-7d4fc7d867-grfh9 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:5443/healthz\": dial tcp 10.217.0.42:5443: connect: connection refused" start-of-body= Jan 30 00:12:25 crc kubenswrapper[5104]: I0130 00:12:25.831869 5104 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-grfh9" podUID="cd4db4af-c1ef-4771-88bf-d372af1849fa" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.42:5443/healthz\": dial tcp 10.217.0.42:5443: connect: connection refused" Jan 30 00:12:25 crc kubenswrapper[5104]: I0130 00:12:25.832872 5104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-74545575db-fgflv" podStartSLOduration=101.832841214 podStartE2EDuration="1m41.832841214s" podCreationTimestamp="2026-01-30 00:10:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:25.805241119 +0000 UTC m=+126.537580358" watchObservedRunningTime="2026-01-30 00:12:25.832841214 +0000 UTC m=+126.565180433" Jan 30 00:12:25 crc kubenswrapper[5104]: I0130 00:12:25.833161 5104 patch_prober.go:28] interesting pod/catalog-operator-75ff9f647d-jcrzz container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" start-of-body= Jan 30 00:12:25 crc kubenswrapper[5104]: I0130 00:12:25.833189 5104 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-jcrzz" podUID="d559e43d-60f9-4f29-8d4e-c595cad2bd22" containerName="catalog-operator" 
probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" Jan 30 00:12:25 crc kubenswrapper[5104]: I0130 00:12:25.834132 5104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-dmtb5" podStartSLOduration=102.834124588 podStartE2EDuration="1m42.834124588s" podCreationTimestamp="2026-01-30 00:10:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:25.830558322 +0000 UTC m=+126.562897541" watchObservedRunningTime="2026-01-30 00:12:25.834124588 +0000 UTC m=+126.566463807" Jan 30 00:12:25 crc kubenswrapper[5104]: I0130 00:12:25.870254 5104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-sdlg2" podStartSLOduration=102.870237703 podStartE2EDuration="1m42.870237703s" podCreationTimestamp="2026-01-30 00:10:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:25.868369372 +0000 UTC m=+126.600708601" watchObservedRunningTime="2026-01-30 00:12:25.870237703 +0000 UTC m=+126.602576962" Jan 30 00:12:25 crc kubenswrapper[5104]: I0130 00:12:25.911570 5104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-jcrzz" podStartSLOduration=102.911555489 podStartE2EDuration="1m42.911555489s" podCreationTimestamp="2026-01-30 00:10:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:25.910558032 +0000 UTC m=+126.642897241" watchObservedRunningTime="2026-01-30 00:12:25.911555489 +0000 UTC m=+126.643894708" Jan 30 00:12:25 crc kubenswrapper[5104]: I0130 
00:12:25.911663 5104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29495520-7xhcr" podStartSLOduration=102.911658681 podStartE2EDuration="1m42.911658681s" podCreationTimestamp="2026-01-30 00:10:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:25.892091323 +0000 UTC m=+126.624430542" watchObservedRunningTime="2026-01-30 00:12:25.911658681 +0000 UTC m=+126.643997900" Jan 30 00:12:25 crc kubenswrapper[5104]: I0130 00:12:25.929609 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:25 crc kubenswrapper[5104]: E0130 00:12:25.931392 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:26.431375063 +0000 UTC m=+127.163714282 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-lhbqs" (UID: "40d2656d-a61b-4aaa-8860-225ca88ac6a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:25 crc kubenswrapper[5104]: I0130 00:12:25.948862 5104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-6rcx2" podStartSLOduration=102.948832585 podStartE2EDuration="1m42.948832585s" podCreationTimestamp="2026-01-30 00:10:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:25.947350435 +0000 UTC m=+126.679689654" watchObservedRunningTime="2026-01-30 00:12:25.948832585 +0000 UTC m=+126.681171804" Jan 30 00:12:25 crc kubenswrapper[5104]: I0130 00:12:25.983838 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ingress/router-default-68cf44c8b8-xs5zv" Jan 30 00:12:25 crc kubenswrapper[5104]: I0130 00:12:25.986285 5104 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-xs5zv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 00:12:25 crc kubenswrapper[5104]: [-]has-synced failed: reason withheld Jan 30 00:12:25 crc kubenswrapper[5104]: [+]process-running ok Jan 30 00:12:25 crc kubenswrapper[5104]: healthz check failed Jan 30 00:12:25 crc kubenswrapper[5104]: I0130 00:12:25.986343 5104 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-xs5zv" 
podUID="5b96d7cb-4106-4adb-baab-92ec201306e2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 00:12:26 crc kubenswrapper[5104]: I0130 00:12:26.031887 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:26 crc kubenswrapper[5104]: E0130 00:12:26.031997 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:26.531977659 +0000 UTC m=+127.264316878 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:26 crc kubenswrapper[5104]: I0130 00:12:26.032586 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:26 crc kubenswrapper[5104]: E0130 00:12:26.032965 5104 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:26.532957335 +0000 UTC m=+127.265296554 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-lhbqs" (UID: "40d2656d-a61b-4aaa-8860-225ca88ac6a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:26 crc kubenswrapper[5104]: I0130 00:12:26.133422 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:26 crc kubenswrapper[5104]: E0130 00:12:26.133817 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:26.633801297 +0000 UTC m=+127.366140516 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:26 crc kubenswrapper[5104]: I0130 00:12:26.234891 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:26 crc kubenswrapper[5104]: E0130 00:12:26.235203 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:26.735190934 +0000 UTC m=+127.467530153 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-lhbqs" (UID: "40d2656d-a61b-4aaa-8860-225ca88ac6a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:26 crc kubenswrapper[5104]: I0130 00:12:26.336232 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:26 crc kubenswrapper[5104]: E0130 00:12:26.336500 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:26.836467629 +0000 UTC m=+127.568806848 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:26 crc kubenswrapper[5104]: I0130 00:12:26.337012 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:26 crc kubenswrapper[5104]: E0130 00:12:26.337304 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:26.837292011 +0000 UTC m=+127.569631230 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-lhbqs" (UID: "40d2656d-a61b-4aaa-8860-225ca88ac6a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:26 crc kubenswrapper[5104]: I0130 00:12:26.378362 5104 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-2f4tq"] Jan 30 00:12:26 crc kubenswrapper[5104]: I0130 00:12:26.438768 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:26 crc kubenswrapper[5104]: E0130 00:12:26.439123 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:26.939101718 +0000 UTC m=+127.671440937 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:26 crc kubenswrapper[5104]: I0130 00:12:26.540483 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:26 crc kubenswrapper[5104]: E0130 00:12:26.540915 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:27.040900187 +0000 UTC m=+127.773239406 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-lhbqs" (UID: "40d2656d-a61b-4aaa-8860-225ca88ac6a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:26 crc kubenswrapper[5104]: I0130 00:12:26.641556 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:26 crc kubenswrapper[5104]: E0130 00:12:26.641731 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:27.141701698 +0000 UTC m=+127.874040917 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:26 crc kubenswrapper[5104]: I0130 00:12:26.642156 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:26 crc kubenswrapper[5104]: E0130 00:12:26.642494 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:27.142481849 +0000 UTC m=+127.874821068 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-lhbqs" (UID: "40d2656d-a61b-4aaa-8860-225ca88ac6a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:26 crc kubenswrapper[5104]: I0130 00:12:26.743364 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:26 crc kubenswrapper[5104]: E0130 00:12:26.743484 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:27.243466494 +0000 UTC m=+127.975805713 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:26 crc kubenswrapper[5104]: I0130 00:12:26.744413 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:26 crc kubenswrapper[5104]: E0130 00:12:26.744693 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:27.244685807 +0000 UTC m=+127.977025026 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-lhbqs" (UID: "40d2656d-a61b-4aaa-8860-225ca88ac6a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:26 crc kubenswrapper[5104]: I0130 00:12:26.835046 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-72hww" event={"ID":"70d57bd5-86d2-4a87-baac-b6c03e6b5cb2","Type":"ContainerStarted","Data":"c7dce216e84aacb03b39de4e250c15d108a5cf144a30c28ca7b1ca8a229b88bc"} Jan 30 00:12:26 crc kubenswrapper[5104]: I0130 00:12:26.835211 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-dns/dns-default-72hww" Jan 30 00:12:26 crc kubenswrapper[5104]: I0130 00:12:26.837804 5104 generic.go:358] "Generic (PLEG): container finished" podID="aac746ff-39ab-49d5-9540-fc59eadfed37" containerID="d9f8427020cbb93f63f24e6d91485d42f15c394b60a5902aac7466fa69e4e2dc" exitCode=0 Jan 30 00:12:26 crc kubenswrapper[5104]: I0130 00:12:26.837896 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"aac746ff-39ab-49d5-9540-fc59eadfed37","Type":"ContainerDied","Data":"d9f8427020cbb93f63f24e6d91485d42f15c394b60a5902aac7466fa69e4e2dc"} Jan 30 00:12:26 crc kubenswrapper[5104]: I0130 00:12:26.839581 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-tgcbf" event={"ID":"10904391-ad3c-46eb-8147-c32c0612487c","Type":"ContainerStarted","Data":"0b378c0796a8063b2d3dac1ae1fadbc28b16586dfe0da31704c3bca3a041eb1a"} Jan 30 00:12:26 crc kubenswrapper[5104]: I0130 00:12:26.842326 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-machine-approver/machine-approver-54c688565-wmdwr" event={"ID":"c1186351-b63f-4a39-b8e6-e01f0b686544","Type":"ContainerStarted","Data":"2793ec5301a7c7c18eaadd308ca9e466f8a836d7a19b848b9383e1d86d1aa4ae"} Jan 30 00:12:26 crc kubenswrapper[5104]: I0130 00:12:26.844576 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-862pd" event={"ID":"6f4c6c29-73d8-44bd-8ab8-1bafe595cf8c","Type":"ContainerStarted","Data":"21863b4b1dc8729f56cd090539ea76ad4906076f15ed4c81ff3aaae969179b15"} Jan 30 00:12:26 crc kubenswrapper[5104]: I0130 00:12:26.845386 5104 patch_prober.go:28] interesting pod/catalog-operator-75ff9f647d-jcrzz container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" start-of-body= Jan 30 00:12:26 crc kubenswrapper[5104]: I0130 00:12:26.845433 5104 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-jcrzz" podUID="d559e43d-60f9-4f29-8d4e-c595cad2bd22" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" Jan 30 00:12:26 crc kubenswrapper[5104]: I0130 00:12:26.846364 5104 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-mb4lh container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/healthz\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Jan 30 00:12:26 crc kubenswrapper[5104]: I0130 00:12:26.846485 5104 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-mb4lh" podUID="b5f128e0-a6da-409d-9937-dc7f8b000da0" containerName="marketplace-operator" probeResult="failure" output="Get 
\"http://10.217.0.25:8080/healthz\": dial tcp 10.217.0.25:8080: connect: connection refused" Jan 30 00:12:26 crc kubenswrapper[5104]: I0130 00:12:26.846673 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:26 crc kubenswrapper[5104]: E0130 00:12:26.847342 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:27.347325328 +0000 UTC m=+128.079664547 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:26 crc kubenswrapper[5104]: I0130 00:12:26.870020 5104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-72hww" podStartSLOduration=11.87000432 podStartE2EDuration="11.87000432s" podCreationTimestamp="2026-01-30 00:12:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:26.862034625 +0000 UTC m=+127.594373844" watchObservedRunningTime="2026-01-30 00:12:26.87000432 +0000 UTC m=+127.602343529" Jan 30 00:12:26 crc kubenswrapper[5104]: I0130 00:12:26.871323 5104 kubelet.go:2537] "SyncLoop ADD" 
source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Jan 30 00:12:26 crc kubenswrapper[5104]: I0130 00:12:26.876154 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 30 00:12:26 crc kubenswrapper[5104]: I0130 00:12:26.878253 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\"" Jan 30 00:12:26 crc kubenswrapper[5104]: I0130 00:12:26.878376 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\"" Jan 30 00:12:26 crc kubenswrapper[5104]: I0130 00:12:26.884699 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Jan 30 00:12:26 crc kubenswrapper[5104]: I0130 00:12:26.948468 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:26 crc kubenswrapper[5104]: I0130 00:12:26.948717 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7a324941-1096-49b3-a2ef-55df038bf42c-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"7a324941-1096-49b3-a2ef-55df038bf42c\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 30 00:12:26 crc kubenswrapper[5104]: I0130 00:12:26.949132 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7a324941-1096-49b3-a2ef-55df038bf42c-kube-api-access\") pod 
\"revision-pruner-11-crc\" (UID: \"7a324941-1096-49b3-a2ef-55df038bf42c\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 30 00:12:26 crc kubenswrapper[5104]: E0130 00:12:26.950729 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:27.450712529 +0000 UTC m=+128.183051748 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-lhbqs" (UID: "40d2656d-a61b-4aaa-8860-225ca88ac6a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:26 crc kubenswrapper[5104]: I0130 00:12:26.974028 5104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-tgcbf" podStartSLOduration=103.974013357 podStartE2EDuration="1m43.974013357s" podCreationTimestamp="2026-01-30 00:10:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:26.972376694 +0000 UTC m=+127.704715913" watchObservedRunningTime="2026-01-30 00:12:26.974013357 +0000 UTC m=+127.706352576" Jan 30 00:12:26 crc kubenswrapper[5104]: I0130 00:12:26.989025 5104 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-xs5zv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 00:12:26 crc kubenswrapper[5104]: [-]has-synced failed: reason withheld Jan 30 00:12:26 crc kubenswrapper[5104]: 
[+]process-running ok Jan 30 00:12:26 crc kubenswrapper[5104]: healthz check failed Jan 30 00:12:26 crc kubenswrapper[5104]: I0130 00:12:26.989350 5104 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-xs5zv" podUID="5b96d7cb-4106-4adb-baab-92ec201306e2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 00:12:26 crc kubenswrapper[5104]: I0130 00:12:26.996646 5104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-54c688565-wmdwr" podStartSLOduration=103.996609577 podStartE2EDuration="1m43.996609577s" podCreationTimestamp="2026-01-30 00:10:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:26.993902064 +0000 UTC m=+127.726241273" watchObservedRunningTime="2026-01-30 00:12:26.996609577 +0000 UTC m=+127.728948796" Jan 30 00:12:27 crc kubenswrapper[5104]: I0130 00:12:27.049943 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:27 crc kubenswrapper[5104]: E0130 00:12:27.050050 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:27.550033419 +0000 UTC m=+128.282372628 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:27 crc kubenswrapper[5104]: I0130 00:12:27.050135 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7a324941-1096-49b3-a2ef-55df038bf42c-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"7a324941-1096-49b3-a2ef-55df038bf42c\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 30 00:12:27 crc kubenswrapper[5104]: I0130 00:12:27.050179 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:27 crc kubenswrapper[5104]: I0130 00:12:27.050224 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7a324941-1096-49b3-a2ef-55df038bf42c-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"7a324941-1096-49b3-a2ef-55df038bf42c\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 30 00:12:27 crc kubenswrapper[5104]: I0130 00:12:27.050317 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7a324941-1096-49b3-a2ef-55df038bf42c-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: 
\"7a324941-1096-49b3-a2ef-55df038bf42c\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 30 00:12:27 crc kubenswrapper[5104]: E0130 00:12:27.050823 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:27.550803741 +0000 UTC m=+128.283142970 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-lhbqs" (UID: "40d2656d-a61b-4aaa-8860-225ca88ac6a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:27 crc kubenswrapper[5104]: I0130 00:12:27.084618 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7a324941-1096-49b3-a2ef-55df038bf42c-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"7a324941-1096-49b3-a2ef-55df038bf42c\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 30 00:12:27 crc kubenswrapper[5104]: I0130 00:12:27.151069 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:27 crc kubenswrapper[5104]: E0130 00:12:27.151225 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-01-30 00:12:27.65119838 +0000 UTC m=+128.383537599 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:27 crc kubenswrapper[5104]: I0130 00:12:27.151538 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:27 crc kubenswrapper[5104]: E0130 00:12:27.151803 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:27.651796907 +0000 UTC m=+128.384136126 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-lhbqs" (UID: "40d2656d-a61b-4aaa-8860-225ca88ac6a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:27 crc kubenswrapper[5104]: I0130 00:12:27.164675 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-5777786469-v56dx" Jan 30 00:12:27 crc kubenswrapper[5104]: I0130 00:12:27.250793 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 30 00:12:27 crc kubenswrapper[5104]: I0130 00:12:27.253482 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:27 crc kubenswrapper[5104]: E0130 00:12:27.253925 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:27.753878122 +0000 UTC m=+128.486217341 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:27 crc kubenswrapper[5104]: I0130 00:12:27.354682 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:27 crc kubenswrapper[5104]: E0130 00:12:27.355055 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:27.855039002 +0000 UTC m=+128.587378221 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-lhbqs" (UID: "40d2656d-a61b-4aaa-8860-225ca88ac6a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:27 crc kubenswrapper[5104]: I0130 00:12:27.363805 5104 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-8596bd845d-675xg" Jan 30 00:12:27 crc kubenswrapper[5104]: I0130 00:12:27.363843 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-675xg" Jan 30 00:12:27 crc kubenswrapper[5104]: I0130 00:12:27.397920 5104 patch_prober.go:28] interesting pod/downloads-747b44746d-kzx6r container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: connect: connection refused" start-of-body= Jan 30 00:12:27 crc kubenswrapper[5104]: I0130 00:12:27.398182 5104 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-kzx6r" podUID="6fd43d75-51fe-42d6-9f2a-adbe6045f25c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: connect: connection refused" Jan 30 00:12:27 crc kubenswrapper[5104]: I0130 00:12:27.455823 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:27 crc kubenswrapper[5104]: E0130 
00:12:27.456702 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:27.956686296 +0000 UTC m=+128.689025515 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:27 crc kubenswrapper[5104]: I0130 00:12:27.534262 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Jan 30 00:12:27 crc kubenswrapper[5104]: I0130 00:12:27.551223 5104 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-8596bd845d-675xg" Jan 30 00:12:27 crc kubenswrapper[5104]: I0130 00:12:27.557641 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:27 crc kubenswrapper[5104]: E0130 00:12:27.557961 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:28.05794734 +0000 UTC m=+128.790286559 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-lhbqs" (UID: "40d2656d-a61b-4aaa-8860-225ca88ac6a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:27 crc kubenswrapper[5104]: I0130 00:12:27.659087 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:27 crc kubenswrapper[5104]: E0130 00:12:27.659384 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:28.159369147 +0000 UTC m=+128.891708366 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:27 crc kubenswrapper[5104]: I0130 00:12:27.692088 5104 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-64d44f6ddf-m6rzk" Jan 30 00:12:27 crc kubenswrapper[5104]: I0130 00:12:27.692128 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/console-64d44f6ddf-m6rzk" Jan 30 00:12:27 crc kubenswrapper[5104]: I0130 00:12:27.694732 5104 patch_prober.go:28] interesting pod/console-64d44f6ddf-m6rzk container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.16:8443/health\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body= Jan 30 00:12:27 crc kubenswrapper[5104]: I0130 00:12:27.694777 5104 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-m6rzk" podUID="a1f8c00b-3459-4b15-ab8c-52407669c50a" containerName="console" probeResult="failure" output="Get \"https://10.217.0.16:8443/health\": dial tcp 10.217.0.16:8443: connect: connection refused" Jan 30 00:12:27 crc kubenswrapper[5104]: I0130 00:12:27.760369 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:27 crc 
kubenswrapper[5104]: E0130 00:12:27.761577 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:28.261565386 +0000 UTC m=+128.993904605 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-lhbqs" (UID: "40d2656d-a61b-4aaa-8860-225ca88ac6a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:27 crc kubenswrapper[5104]: I0130 00:12:27.849937 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"7a324941-1096-49b3-a2ef-55df038bf42c","Type":"ContainerStarted","Data":"305b5f4b27d1d11538e107690c8a4a81220b001c7b66133a1266de5dec2d651d"} Jan 30 00:12:27 crc kubenswrapper[5104]: I0130 00:12:27.851694 5104 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-2f4tq" podUID="3b2a92e1-d95a-4a3e-a07e-62e5100931bb" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://6d7c87fb3a68f735f4c537d11449d981309ba9768c4e831b86b0eb98080f3b42" gracePeriod=30 Jan 30 00:12:27 crc kubenswrapper[5104]: I0130 00:12:27.857996 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-675xg" Jan 30 00:12:27 crc kubenswrapper[5104]: I0130 00:12:27.861365 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:27 crc kubenswrapper[5104]: E0130 00:12:27.861534 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:28.361506494 +0000 UTC m=+129.093845713 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:27 crc kubenswrapper[5104]: I0130 00:12:27.862484 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:27 crc kubenswrapper[5104]: E0130 00:12:27.863043 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:28.363035175 +0000 UTC m=+129.095374394 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-lhbqs" (UID: "40d2656d-a61b-4aaa-8860-225ca88ac6a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:27 crc kubenswrapper[5104]: I0130 00:12:27.945366 5104 ???:1] "http: TLS handshake error from 192.168.126.11:45690: no serving certificate available for the kubelet" Jan 30 00:12:27 crc kubenswrapper[5104]: I0130 00:12:27.963694 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:27 crc kubenswrapper[5104]: E0130 00:12:27.963920 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:28.463877927 +0000 UTC m=+129.196217146 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:27 crc kubenswrapper[5104]: I0130 00:12:27.964066 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:27 crc kubenswrapper[5104]: E0130 00:12:27.964478 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:28.464471303 +0000 UTC m=+129.196810512 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-lhbqs" (UID: "40d2656d-a61b-4aaa-8860-225ca88ac6a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:27 crc kubenswrapper[5104]: I0130 00:12:27.986083 5104 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-xs5zv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 00:12:27 crc kubenswrapper[5104]: [-]has-synced failed: reason withheld Jan 30 00:12:27 crc kubenswrapper[5104]: [+]process-running ok Jan 30 00:12:27 crc kubenswrapper[5104]: healthz check failed Jan 30 00:12:27 crc kubenswrapper[5104]: I0130 00:12:27.986148 5104 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-xs5zv" podUID="5b96d7cb-4106-4adb-baab-92ec201306e2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 00:12:28 crc kubenswrapper[5104]: I0130 00:12:28.045725 5104 ???:1] "http: TLS handshake error from 192.168.126.11:45706: no serving certificate available for the kubelet" Jan 30 00:12:28 crc kubenswrapper[5104]: I0130 00:12:28.066549 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:28 crc kubenswrapper[5104]: E0130 00:12:28.066934 5104 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:28.566900058 +0000 UTC m=+129.299239277 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:28 crc kubenswrapper[5104]: I0130 00:12:28.067138 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:28 crc kubenswrapper[5104]: E0130 00:12:28.067413 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:28.567400031 +0000 UTC m=+129.299739250 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-lhbqs" (UID: "40d2656d-a61b-4aaa-8860-225ca88ac6a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:28 crc kubenswrapper[5104]: I0130 00:12:28.147626 5104 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 30 00:12:28 crc kubenswrapper[5104]: I0130 00:12:28.157221 5104 ???:1] "http: TLS handshake error from 192.168.126.11:45716: no serving certificate available for the kubelet" Jan 30 00:12:28 crc kubenswrapper[5104]: I0130 00:12:28.169008 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/aac746ff-39ab-49d5-9540-fc59eadfed37-kubelet-dir\") pod \"aac746ff-39ab-49d5-9540-fc59eadfed37\" (UID: \"aac746ff-39ab-49d5-9540-fc59eadfed37\") " Jan 30 00:12:28 crc kubenswrapper[5104]: I0130 00:12:28.169070 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aac746ff-39ab-49d5-9540-fc59eadfed37-kube-api-access\") pod \"aac746ff-39ab-49d5-9540-fc59eadfed37\" (UID: \"aac746ff-39ab-49d5-9540-fc59eadfed37\") " Jan 30 00:12:28 crc kubenswrapper[5104]: I0130 00:12:28.169142 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aac746ff-39ab-49d5-9540-fc59eadfed37-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "aac746ff-39ab-49d5-9540-fc59eadfed37" (UID: "aac746ff-39ab-49d5-9540-fc59eadfed37"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:12:28 crc kubenswrapper[5104]: I0130 00:12:28.169195 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:28 crc kubenswrapper[5104]: E0130 00:12:28.169340 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:28.669306862 +0000 UTC m=+129.401646091 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:28 crc kubenswrapper[5104]: I0130 00:12:28.169508 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:28 crc kubenswrapper[5104]: I0130 00:12:28.169781 5104 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/aac746ff-39ab-49d5-9540-fc59eadfed37-kubelet-dir\") on node \"crc\" 
DevicePath \"\"" Jan 30 00:12:28 crc kubenswrapper[5104]: E0130 00:12:28.169900 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:28.669884568 +0000 UTC m=+129.402223787 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-lhbqs" (UID: "40d2656d-a61b-4aaa-8860-225ca88ac6a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:28 crc kubenswrapper[5104]: I0130 00:12:28.181774 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aac746ff-39ab-49d5-9540-fc59eadfed37-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "aac746ff-39ab-49d5-9540-fc59eadfed37" (UID: "aac746ff-39ab-49d5-9540-fc59eadfed37"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:12:28 crc kubenswrapper[5104]: I0130 00:12:28.247221 5104 ???:1] "http: TLS handshake error from 192.168.126.11:45722: no serving certificate available for the kubelet" Jan 30 00:12:28 crc kubenswrapper[5104]: I0130 00:12:28.271392 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:28 crc kubenswrapper[5104]: E0130 00:12:28.271594 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:28.771560452 +0000 UTC m=+129.503899671 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:28 crc kubenswrapper[5104]: I0130 00:12:28.271699 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:28 crc kubenswrapper[5104]: I0130 00:12:28.271840 5104 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aac746ff-39ab-49d5-9540-fc59eadfed37-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:28 crc kubenswrapper[5104]: E0130 00:12:28.272171 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:28.772151619 +0000 UTC m=+129.504490838 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-lhbqs" (UID: "40d2656d-a61b-4aaa-8860-225ca88ac6a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:28 crc kubenswrapper[5104]: I0130 00:12:28.351108 5104 ???:1] "http: TLS handshake error from 192.168.126.11:45726: no serving certificate available for the kubelet" Jan 30 00:12:28 crc kubenswrapper[5104]: I0130 00:12:28.373101 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:28 crc kubenswrapper[5104]: E0130 00:12:28.373715 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:28.873699549 +0000 UTC m=+129.606038768 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:28 crc kubenswrapper[5104]: I0130 00:12:28.381684 5104 ???:1] "http: TLS handshake error from 192.168.126.11:45736: no serving certificate available for the kubelet" Jan 30 00:12:28 crc kubenswrapper[5104]: I0130 00:12:28.491351 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:28 crc kubenswrapper[5104]: E0130 00:12:28.494752 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:28.994734936 +0000 UTC m=+129.727074155 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-lhbqs" (UID: "40d2656d-a61b-4aaa-8860-225ca88ac6a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:28 crc kubenswrapper[5104]: I0130 00:12:28.499351 5104 ???:1] "http: TLS handshake error from 192.168.126.11:45748: no serving certificate available for the kubelet" Jan 30 00:12:28 crc kubenswrapper[5104]: I0130 00:12:28.595786 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:28 crc kubenswrapper[5104]: E0130 00:12:28.595984 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:29.095954758 +0000 UTC m=+129.828293977 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:28 crc kubenswrapper[5104]: I0130 00:12:28.596251 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:28 crc kubenswrapper[5104]: E0130 00:12:28.596571 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:29.096557154 +0000 UTC m=+129.828896373 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-lhbqs" (UID: "40d2656d-a61b-4aaa-8860-225ca88ac6a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:28 crc kubenswrapper[5104]: I0130 00:12:28.657492 5104 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-r6xks"] Jan 30 00:12:28 crc kubenswrapper[5104]: I0130 00:12:28.658511 5104 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="aac746ff-39ab-49d5-9540-fc59eadfed37" containerName="pruner" Jan 30 00:12:28 crc kubenswrapper[5104]: I0130 00:12:28.658535 5104 state_mem.go:107] "Deleted CPUSet assignment" podUID="aac746ff-39ab-49d5-9540-fc59eadfed37" containerName="pruner" Jan 30 00:12:28 crc kubenswrapper[5104]: I0130 00:12:28.658638 5104 memory_manager.go:356] "RemoveStaleState removing state" podUID="aac746ff-39ab-49d5-9540-fc59eadfed37" containerName="pruner" Jan 30 00:12:28 crc kubenswrapper[5104]: I0130 00:12:28.668410 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r6xks"] Jan 30 00:12:28 crc kubenswrapper[5104]: I0130 00:12:28.668583 5104 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-r6xks" Jan 30 00:12:28 crc kubenswrapper[5104]: I0130 00:12:28.670647 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Jan 30 00:12:28 crc kubenswrapper[5104]: I0130 00:12:28.697802 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:28 crc kubenswrapper[5104]: E0130 00:12:28.697988 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:29.197960922 +0000 UTC m=+129.930300141 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:28 crc kubenswrapper[5104]: I0130 00:12:28.698140 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:28 crc kubenswrapper[5104]: E0130 00:12:28.698634 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:29.19862606 +0000 UTC m=+129.930965279 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-lhbqs" (UID: "40d2656d-a61b-4aaa-8860-225ca88ac6a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:28 crc kubenswrapper[5104]: I0130 00:12:28.798950 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:28 crc kubenswrapper[5104]: E0130 00:12:28.799129 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:29.299102532 +0000 UTC m=+130.031441751 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:28 crc kubenswrapper[5104]: I0130 00:12:28.799495 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:28 crc kubenswrapper[5104]: I0130 00:12:28.799593 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s44xq\" (UniqueName: \"kubernetes.io/projected/103981ae-943d-41ab-a2d1-9cafe7669187-kube-api-access-s44xq\") pod \"certified-operators-r6xks\" (UID: \"103981ae-943d-41ab-a2d1-9cafe7669187\") " pod="openshift-marketplace/certified-operators-r6xks" Jan 30 00:12:28 crc kubenswrapper[5104]: I0130 00:12:28.799617 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/103981ae-943d-41ab-a2d1-9cafe7669187-catalog-content\") pod \"certified-operators-r6xks\" (UID: \"103981ae-943d-41ab-a2d1-9cafe7669187\") " pod="openshift-marketplace/certified-operators-r6xks" Jan 30 00:12:28 crc kubenswrapper[5104]: I0130 00:12:28.799694 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/103981ae-943d-41ab-a2d1-9cafe7669187-utilities\") pod \"certified-operators-r6xks\" (UID: \"103981ae-943d-41ab-a2d1-9cafe7669187\") " pod="openshift-marketplace/certified-operators-r6xks" Jan 30 00:12:28 crc kubenswrapper[5104]: E0130 00:12:28.800007 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:29.299996026 +0000 UTC m=+130.032335245 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-lhbqs" (UID: "40d2656d-a61b-4aaa-8860-225ca88ac6a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:28 crc kubenswrapper[5104]: I0130 00:12:28.839311 5104 ???:1] "http: TLS handshake error from 192.168.126.11:45762: no serving certificate available for the kubelet" Jan 30 00:12:28 crc kubenswrapper[5104]: I0130 00:12:28.855552 5104 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-kzfbd"] Jan 30 00:12:28 crc kubenswrapper[5104]: I0130 00:12:28.861009 5104 generic.go:358] "Generic (PLEG): container finished" podID="7a324941-1096-49b3-a2ef-55df038bf42c" containerID="161cce2a2d4786179beebaeb420e73f279781bb55eea5da23fed12b67fc1f2d2" exitCode=0 Jan 30 00:12:28 crc kubenswrapper[5104]: I0130 00:12:28.863929 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"aac746ff-39ab-49d5-9540-fc59eadfed37","Type":"ContainerDied","Data":"02e917e70cb0aa9e758cd2c7d7ad340dddc1adf287c01aa8afe73268df00d2b3"} Jan 30 00:12:28 crc kubenswrapper[5104]: 
I0130 00:12:28.863989 5104 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02e917e70cb0aa9e758cd2c7d7ad340dddc1adf287c01aa8afe73268df00d2b3" Jan 30 00:12:28 crc kubenswrapper[5104]: I0130 00:12:28.864015 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"7a324941-1096-49b3-a2ef-55df038bf42c","Type":"ContainerDied","Data":"161cce2a2d4786179beebaeb420e73f279781bb55eea5da23fed12b67fc1f2d2"} Jan 30 00:12:28 crc kubenswrapper[5104]: I0130 00:12:28.864037 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-862pd" event={"ID":"6f4c6c29-73d8-44bd-8ab8-1bafe595cf8c","Type":"ContainerStarted","Data":"31da6e69421e14b9c17e0959f929c4bac6805fbac8d633d06eda9b2186c31cda"} Jan 30 00:12:28 crc kubenswrapper[5104]: I0130 00:12:28.864201 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kzfbd" Jan 30 00:12:28 crc kubenswrapper[5104]: I0130 00:12:28.864440 5104 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 30 00:12:28 crc kubenswrapper[5104]: I0130 00:12:28.865635 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kzfbd"] Jan 30 00:12:28 crc kubenswrapper[5104]: I0130 00:12:28.866693 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Jan 30 00:12:28 crc kubenswrapper[5104]: I0130 00:12:28.900659 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:28 crc kubenswrapper[5104]: E0130 00:12:28.900879 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:29.400833048 +0000 UTC m=+130.133172267 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:28 crc kubenswrapper[5104]: I0130 00:12:28.901077 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:28 crc kubenswrapper[5104]: I0130 00:12:28.901148 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74720252-7847-489b-a755-3c27d70770f9-utilities\") pod \"community-operators-kzfbd\" (UID: \"74720252-7847-489b-a755-3c27d70770f9\") " pod="openshift-marketplace/community-operators-kzfbd" Jan 30 00:12:28 crc kubenswrapper[5104]: I0130 00:12:28.901252 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-s44xq\" (UniqueName: \"kubernetes.io/projected/103981ae-943d-41ab-a2d1-9cafe7669187-kube-api-access-s44xq\") pod \"certified-operators-r6xks\" (UID: \"103981ae-943d-41ab-a2d1-9cafe7669187\") " pod="openshift-marketplace/certified-operators-r6xks" Jan 30 00:12:28 crc kubenswrapper[5104]: I0130 00:12:28.901289 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/103981ae-943d-41ab-a2d1-9cafe7669187-catalog-content\") pod 
\"certified-operators-r6xks\" (UID: \"103981ae-943d-41ab-a2d1-9cafe7669187\") " pod="openshift-marketplace/certified-operators-r6xks" Jan 30 00:12:28 crc kubenswrapper[5104]: I0130 00:12:28.901360 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mn29\" (UniqueName: \"kubernetes.io/projected/74720252-7847-489b-a755-3c27d70770f9-kube-api-access-6mn29\") pod \"community-operators-kzfbd\" (UID: \"74720252-7847-489b-a755-3c27d70770f9\") " pod="openshift-marketplace/community-operators-kzfbd" Jan 30 00:12:28 crc kubenswrapper[5104]: E0130 00:12:28.901434 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:29.401421434 +0000 UTC m=+130.133760653 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-lhbqs" (UID: "40d2656d-a61b-4aaa-8860-225ca88ac6a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:28 crc kubenswrapper[5104]: I0130 00:12:28.901461 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/103981ae-943d-41ab-a2d1-9cafe7669187-utilities\") pod \"certified-operators-r6xks\" (UID: \"103981ae-943d-41ab-a2d1-9cafe7669187\") " pod="openshift-marketplace/certified-operators-r6xks" Jan 30 00:12:28 crc kubenswrapper[5104]: I0130 00:12:28.901497 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/74720252-7847-489b-a755-3c27d70770f9-catalog-content\") pod \"community-operators-kzfbd\" (UID: \"74720252-7847-489b-a755-3c27d70770f9\") " pod="openshift-marketplace/community-operators-kzfbd" Jan 30 00:12:28 crc kubenswrapper[5104]: I0130 00:12:28.901803 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/103981ae-943d-41ab-a2d1-9cafe7669187-catalog-content\") pod \"certified-operators-r6xks\" (UID: \"103981ae-943d-41ab-a2d1-9cafe7669187\") " pod="openshift-marketplace/certified-operators-r6xks" Jan 30 00:12:28 crc kubenswrapper[5104]: I0130 00:12:28.901935 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/103981ae-943d-41ab-a2d1-9cafe7669187-utilities\") pod \"certified-operators-r6xks\" (UID: \"103981ae-943d-41ab-a2d1-9cafe7669187\") " pod="openshift-marketplace/certified-operators-r6xks" Jan 30 00:12:28 crc kubenswrapper[5104]: I0130 00:12:28.936364 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-s44xq\" (UniqueName: \"kubernetes.io/projected/103981ae-943d-41ab-a2d1-9cafe7669187-kube-api-access-s44xq\") pod \"certified-operators-r6xks\" (UID: \"103981ae-943d-41ab-a2d1-9cafe7669187\") " pod="openshift-marketplace/certified-operators-r6xks" Jan 30 00:12:28 crc kubenswrapper[5104]: I0130 00:12:28.987988 5104 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-xs5zv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 00:12:28 crc kubenswrapper[5104]: [-]has-synced failed: reason withheld Jan 30 00:12:28 crc kubenswrapper[5104]: [+]process-running ok Jan 30 00:12:28 crc kubenswrapper[5104]: healthz check failed Jan 30 00:12:28 crc kubenswrapper[5104]: I0130 00:12:28.988125 5104 prober.go:120] "Probe 
failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-xs5zv" podUID="5b96d7cb-4106-4adb-baab-92ec201306e2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 00:12:29 crc kubenswrapper[5104]: I0130 00:12:29.002548 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:29 crc kubenswrapper[5104]: I0130 00:12:29.002650 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74720252-7847-489b-a755-3c27d70770f9-catalog-content\") pod \"community-operators-kzfbd\" (UID: \"74720252-7847-489b-a755-3c27d70770f9\") " pod="openshift-marketplace/community-operators-kzfbd" Jan 30 00:12:29 crc kubenswrapper[5104]: E0130 00:12:29.002727 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:29.502699288 +0000 UTC m=+130.235038507 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:29 crc kubenswrapper[5104]: I0130 00:12:29.002942 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:29 crc kubenswrapper[5104]: I0130 00:12:29.003025 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74720252-7847-489b-a755-3c27d70770f9-utilities\") pod \"community-operators-kzfbd\" (UID: \"74720252-7847-489b-a755-3c27d70770f9\") " pod="openshift-marketplace/community-operators-kzfbd" Jan 30 00:12:29 crc kubenswrapper[5104]: I0130 00:12:29.003166 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6mn29\" (UniqueName: \"kubernetes.io/projected/74720252-7847-489b-a755-3c27d70770f9-kube-api-access-6mn29\") pod \"community-operators-kzfbd\" (UID: \"74720252-7847-489b-a755-3c27d70770f9\") " pod="openshift-marketplace/community-operators-kzfbd" Jan 30 00:12:29 crc kubenswrapper[5104]: I0130 00:12:29.003485 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74720252-7847-489b-a755-3c27d70770f9-catalog-content\") pod \"community-operators-kzfbd\" (UID: 
\"74720252-7847-489b-a755-3c27d70770f9\") " pod="openshift-marketplace/community-operators-kzfbd" Jan 30 00:12:29 crc kubenswrapper[5104]: I0130 00:12:29.003657 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74720252-7847-489b-a755-3c27d70770f9-utilities\") pod \"community-operators-kzfbd\" (UID: \"74720252-7847-489b-a755-3c27d70770f9\") " pod="openshift-marketplace/community-operators-kzfbd" Jan 30 00:12:29 crc kubenswrapper[5104]: E0130 00:12:29.003742 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:29.503727476 +0000 UTC m=+130.236066695 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-lhbqs" (UID: "40d2656d-a61b-4aaa-8860-225ca88ac6a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:29 crc kubenswrapper[5104]: I0130 00:12:29.009340 5104 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-r6xks" Jan 30 00:12:29 crc kubenswrapper[5104]: I0130 00:12:29.034925 5104 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 30 00:12:29 crc kubenswrapper[5104]: I0130 00:12:29.035628 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mn29\" (UniqueName: \"kubernetes.io/projected/74720252-7847-489b-a755-3c27d70770f9-kube-api-access-6mn29\") pod \"community-operators-kzfbd\" (UID: \"74720252-7847-489b-a755-3c27d70770f9\") " pod="openshift-marketplace/community-operators-kzfbd" Jan 30 00:12:29 crc kubenswrapper[5104]: I0130 00:12:29.054780 5104 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dvdcz"] Jan 30 00:12:29 crc kubenswrapper[5104]: I0130 00:12:29.075988 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dvdcz"] Jan 30 00:12:29 crc kubenswrapper[5104]: I0130 00:12:29.076176 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dvdcz" Jan 30 00:12:29 crc kubenswrapper[5104]: I0130 00:12:29.104006 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:29 crc kubenswrapper[5104]: E0130 00:12:29.104192 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-01-30 00:12:29.604162657 +0000 UTC m=+130.336501876 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:29 crc kubenswrapper[5104]: I0130 00:12:29.104645 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:29 crc kubenswrapper[5104]: E0130 00:12:29.104939 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:29.604926987 +0000 UTC m=+130.337266206 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-lhbqs" (UID: "40d2656d-a61b-4aaa-8860-225ca88ac6a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:29 crc kubenswrapper[5104]: I0130 00:12:29.205754 5104 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kzfbd" Jan 30 00:12:29 crc kubenswrapper[5104]: I0130 00:12:29.206056 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:29 crc kubenswrapper[5104]: I0130 00:12:29.206346 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92919674-8c7c-46d5-a719-aaf5b45bbc45-catalog-content\") pod \"certified-operators-dvdcz\" (UID: \"92919674-8c7c-46d5-a719-aaf5b45bbc45\") " pod="openshift-marketplace/certified-operators-dvdcz" Jan 30 00:12:29 crc kubenswrapper[5104]: E0130 00:12:29.206404 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:29.706389346 +0000 UTC m=+130.438728555 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:29 crc kubenswrapper[5104]: I0130 00:12:29.206471 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdrr6\" (UniqueName: \"kubernetes.io/projected/92919674-8c7c-46d5-a719-aaf5b45bbc45-kube-api-access-qdrr6\") pod \"certified-operators-dvdcz\" (UID: \"92919674-8c7c-46d5-a719-aaf5b45bbc45\") " pod="openshift-marketplace/certified-operators-dvdcz" Jan 30 00:12:29 crc kubenswrapper[5104]: I0130 00:12:29.206510 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:29 crc kubenswrapper[5104]: I0130 00:12:29.206535 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92919674-8c7c-46d5-a719-aaf5b45bbc45-utilities\") pod \"certified-operators-dvdcz\" (UID: \"92919674-8c7c-46d5-a719-aaf5b45bbc45\") " pod="openshift-marketplace/certified-operators-dvdcz" Jan 30 00:12:29 crc kubenswrapper[5104]: E0130 00:12:29.206758 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2026-01-30 00:12:29.706750336 +0000 UTC m=+130.439089555 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-lhbqs" (UID: "40d2656d-a61b-4aaa-8860-225ca88ac6a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:29 crc kubenswrapper[5104]: I0130 00:12:29.222630 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r6xks"] Jan 30 00:12:29 crc kubenswrapper[5104]: I0130 00:12:29.252252 5104 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dxc6g"] Jan 30 00:12:29 crc kubenswrapper[5104]: I0130 00:12:29.263982 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dxc6g"] Jan 30 00:12:29 crc kubenswrapper[5104]: I0130 00:12:29.264020 5104 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dxc6g" Jan 30 00:12:29 crc kubenswrapper[5104]: I0130 00:12:29.307340 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:29 crc kubenswrapper[5104]: I0130 00:12:29.307454 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92919674-8c7c-46d5-a719-aaf5b45bbc45-utilities\") pod \"certified-operators-dvdcz\" (UID: \"92919674-8c7c-46d5-a719-aaf5b45bbc45\") " pod="openshift-marketplace/certified-operators-dvdcz" Jan 30 00:12:29 crc kubenswrapper[5104]: I0130 00:12:29.307490 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2b5358a-627a-4a04-8bbf-7865a366375b-catalog-content\") pod \"community-operators-dxc6g\" (UID: \"c2b5358a-627a-4a04-8bbf-7865a366375b\") " pod="openshift-marketplace/community-operators-dxc6g" Jan 30 00:12:29 crc kubenswrapper[5104]: I0130 00:12:29.307521 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92919674-8c7c-46d5-a719-aaf5b45bbc45-catalog-content\") pod \"certified-operators-dvdcz\" (UID: \"92919674-8c7c-46d5-a719-aaf5b45bbc45\") " pod="openshift-marketplace/certified-operators-dvdcz" Jan 30 00:12:29 crc kubenswrapper[5104]: I0130 00:12:29.307564 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2b5358a-627a-4a04-8bbf-7865a366375b-utilities\") pod \"community-operators-dxc6g\" (UID: 
\"c2b5358a-627a-4a04-8bbf-7865a366375b\") " pod="openshift-marketplace/community-operators-dxc6g" Jan 30 00:12:29 crc kubenswrapper[5104]: I0130 00:12:29.307582 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x69cg\" (UniqueName: \"kubernetes.io/projected/c2b5358a-627a-4a04-8bbf-7865a366375b-kube-api-access-x69cg\") pod \"community-operators-dxc6g\" (UID: \"c2b5358a-627a-4a04-8bbf-7865a366375b\") " pod="openshift-marketplace/community-operators-dxc6g" Jan 30 00:12:29 crc kubenswrapper[5104]: I0130 00:12:29.307646 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qdrr6\" (UniqueName: \"kubernetes.io/projected/92919674-8c7c-46d5-a719-aaf5b45bbc45-kube-api-access-qdrr6\") pod \"certified-operators-dvdcz\" (UID: \"92919674-8c7c-46d5-a719-aaf5b45bbc45\") " pod="openshift-marketplace/certified-operators-dvdcz" Jan 30 00:12:29 crc kubenswrapper[5104]: E0130 00:12:29.308008 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:29.807993178 +0000 UTC m=+130.540332397 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:29 crc kubenswrapper[5104]: I0130 00:12:29.308289 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92919674-8c7c-46d5-a719-aaf5b45bbc45-utilities\") pod \"certified-operators-dvdcz\" (UID: \"92919674-8c7c-46d5-a719-aaf5b45bbc45\") " pod="openshift-marketplace/certified-operators-dvdcz" Jan 30 00:12:29 crc kubenswrapper[5104]: I0130 00:12:29.311245 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92919674-8c7c-46d5-a719-aaf5b45bbc45-catalog-content\") pod \"certified-operators-dvdcz\" (UID: \"92919674-8c7c-46d5-a719-aaf5b45bbc45\") " pod="openshift-marketplace/certified-operators-dvdcz" Jan 30 00:12:29 crc kubenswrapper[5104]: I0130 00:12:29.346270 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdrr6\" (UniqueName: \"kubernetes.io/projected/92919674-8c7c-46d5-a719-aaf5b45bbc45-kube-api-access-qdrr6\") pod \"certified-operators-dvdcz\" (UID: \"92919674-8c7c-46d5-a719-aaf5b45bbc45\") " pod="openshift-marketplace/certified-operators-dvdcz" Jan 30 00:12:29 crc kubenswrapper[5104]: I0130 00:12:29.391989 5104 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dvdcz" Jan 30 00:12:29 crc kubenswrapper[5104]: I0130 00:12:29.408772 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2b5358a-627a-4a04-8bbf-7865a366375b-utilities\") pod \"community-operators-dxc6g\" (UID: \"c2b5358a-627a-4a04-8bbf-7865a366375b\") " pod="openshift-marketplace/community-operators-dxc6g" Jan 30 00:12:29 crc kubenswrapper[5104]: I0130 00:12:29.408812 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-x69cg\" (UniqueName: \"kubernetes.io/projected/c2b5358a-627a-4a04-8bbf-7865a366375b-kube-api-access-x69cg\") pod \"community-operators-dxc6g\" (UID: \"c2b5358a-627a-4a04-8bbf-7865a366375b\") " pod="openshift-marketplace/community-operators-dxc6g" Jan 30 00:12:29 crc kubenswrapper[5104]: I0130 00:12:29.408871 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:29 crc kubenswrapper[5104]: I0130 00:12:29.408914 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2b5358a-627a-4a04-8bbf-7865a366375b-catalog-content\") pod \"community-operators-dxc6g\" (UID: \"c2b5358a-627a-4a04-8bbf-7865a366375b\") " pod="openshift-marketplace/community-operators-dxc6g" Jan 30 00:12:29 crc kubenswrapper[5104]: I0130 00:12:29.410415 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2b5358a-627a-4a04-8bbf-7865a366375b-utilities\") pod \"community-operators-dxc6g\" 
(UID: \"c2b5358a-627a-4a04-8bbf-7865a366375b\") " pod="openshift-marketplace/community-operators-dxc6g" Jan 30 00:12:29 crc kubenswrapper[5104]: E0130 00:12:29.411058 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:29.91104099 +0000 UTC m=+130.643380209 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-lhbqs" (UID: "40d2656d-a61b-4aaa-8860-225ca88ac6a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:29 crc kubenswrapper[5104]: I0130 00:12:29.411437 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2b5358a-627a-4a04-8bbf-7865a366375b-catalog-content\") pod \"community-operators-dxc6g\" (UID: \"c2b5358a-627a-4a04-8bbf-7865a366375b\") " pod="openshift-marketplace/community-operators-dxc6g" Jan 30 00:12:29 crc kubenswrapper[5104]: I0130 00:12:29.428599 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-x69cg\" (UniqueName: \"kubernetes.io/projected/c2b5358a-627a-4a04-8bbf-7865a366375b-kube-api-access-x69cg\") pod \"community-operators-dxc6g\" (UID: \"c2b5358a-627a-4a04-8bbf-7865a366375b\") " pod="openshift-marketplace/community-operators-dxc6g" Jan 30 00:12:29 crc kubenswrapper[5104]: I0130 00:12:29.510333 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:29 crc kubenswrapper[5104]: E0130 00:12:29.510730 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:30.010712991 +0000 UTC m=+130.743052210 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:29 crc kubenswrapper[5104]: I0130 00:12:29.515554 5104 ???:1] "http: TLS handshake error from 192.168.126.11:45776: no serving certificate available for the kubelet" Jan 30 00:12:29 crc kubenswrapper[5104]: I0130 00:12:29.593634 5104 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dxc6g" Jan 30 00:12:29 crc kubenswrapper[5104]: I0130 00:12:29.613829 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:29 crc kubenswrapper[5104]: E0130 00:12:29.614132 5104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:30.114118812 +0000 UTC m=+130.846458021 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-lhbqs" (UID: "40d2656d-a61b-4aaa-8860-225ca88ac6a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:29 crc kubenswrapper[5104]: I0130 00:12:29.679451 5104 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-30T00:12:29.034951279Z","UUID":"389d4e99-eada-4a1c-9824-a956faec45c3","Handler":null,"Name":"","Endpoint":""} Jan 30 00:12:29 crc kubenswrapper[5104]: I0130 00:12:29.692620 5104 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 30 00:12:29 crc 
kubenswrapper[5104]: I0130 00:12:29.692664 5104 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 30 00:12:29 crc kubenswrapper[5104]: I0130 00:12:29.715389 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:29 crc kubenswrapper[5104]: I0130 00:12:29.727547 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". 
PluginName "kubernetes.io/csi", VolumeGIDValue "" Jan 30 00:12:29 crc kubenswrapper[5104]: I0130 00:12:29.783230 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kzfbd"] Jan 30 00:12:29 crc kubenswrapper[5104]: I0130 00:12:29.816568 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:29 crc kubenswrapper[5104]: I0130 00:12:29.824572 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dvdcz"] Jan 30 00:12:29 crc kubenswrapper[5104]: I0130 00:12:29.827247 5104 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 30 00:12:29 crc kubenswrapper[5104]: I0130 00:12:29.827290 5104 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1264ac67579ad07e7e9003054d44fe40dd55285a4b2f7dc74e48be1aee0868a/globalmount\"" pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:29 crc kubenswrapper[5104]: I0130 00:12:29.876726 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-lhbqs\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") " pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:29 crc kubenswrapper[5104]: I0130 00:12:29.891719 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kzfbd" event={"ID":"74720252-7847-489b-a755-3c27d70770f9","Type":"ContainerStarted","Data":"f161f79c2a616da69b79c68306f8271956c9d2d340268ef51b9e4e952d74b0dc"} Jan 30 00:12:29 crc kubenswrapper[5104]: I0130 00:12:29.893108 5104 generic.go:358] "Generic (PLEG): container finished" podID="103981ae-943d-41ab-a2d1-9cafe7669187" containerID="c0d91a7e4158ce595bd843c6a3fddd19ff0af4a771b60f4965a55add1741a639" exitCode=0 Jan 30 00:12:29 crc kubenswrapper[5104]: I0130 00:12:29.894043 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r6xks" event={"ID":"103981ae-943d-41ab-a2d1-9cafe7669187","Type":"ContainerDied","Data":"c0d91a7e4158ce595bd843c6a3fddd19ff0af4a771b60f4965a55add1741a639"} Jan 30 00:12:29 crc 
kubenswrapper[5104]: I0130 00:12:29.894065 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r6xks" event={"ID":"103981ae-943d-41ab-a2d1-9cafe7669187","Type":"ContainerStarted","Data":"3e5429138ed3b241e33c7b3d345094fd08b25a7a2e21537d0a0f7d6cbb1bab04"} Jan 30 00:12:29 crc kubenswrapper[5104]: I0130 00:12:29.941286 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-862pd" event={"ID":"6f4c6c29-73d8-44bd-8ab8-1bafe595cf8c","Type":"ContainerStarted","Data":"429531688bc2b7897b7dbcfc64b0816472bb0c40c081a012cf70479596d0ab0d"} Jan 30 00:12:29 crc kubenswrapper[5104]: I0130 00:12:29.941332 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-862pd" event={"ID":"6f4c6c29-73d8-44bd-8ab8-1bafe595cf8c","Type":"ContainerStarted","Data":"bdc903404b646a9485e1d8dda40c9b7c2b4f4e89f116a1ab68f55181ba1ee30a"} Jan 30 00:12:29 crc kubenswrapper[5104]: I0130 00:12:29.943836 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dxc6g"] Jan 30 00:12:29 crc kubenswrapper[5104]: I0130 00:12:29.946833 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dvdcz" event={"ID":"92919674-8c7c-46d5-a719-aaf5b45bbc45","Type":"ContainerStarted","Data":"6b891c25571f6725bc0ed860a5c10385a3fcb11fc867cadd5b0fe8b890dddb81"} Jan 30 00:12:29 crc kubenswrapper[5104]: I0130 00:12:29.983996 5104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-862pd" podStartSLOduration=14.983977516 podStartE2EDuration="14.983977516s" podCreationTimestamp="2026-01-30 00:12:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:29.983679808 +0000 UTC m=+130.716019027" watchObservedRunningTime="2026-01-30 00:12:29.983977516 +0000 
UTC m=+130.716316735" Jan 30 00:12:29 crc kubenswrapper[5104]: I0130 00:12:29.996670 5104 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-xs5zv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 00:12:29 crc kubenswrapper[5104]: [-]has-synced failed: reason withheld Jan 30 00:12:29 crc kubenswrapper[5104]: [+]process-running ok Jan 30 00:12:29 crc kubenswrapper[5104]: healthz check failed Jan 30 00:12:29 crc kubenswrapper[5104]: I0130 00:12:29.996726 5104 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-xs5zv" podUID="5b96d7cb-4106-4adb-baab-92ec201306e2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 00:12:30 crc kubenswrapper[5104]: I0130 00:12:30.112801 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Jan 30 00:12:30 crc kubenswrapper[5104]: I0130 00:12:30.117547 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:30 crc kubenswrapper[5104]: I0130 00:12:30.211631 5104 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 30 00:12:30 crc kubenswrapper[5104]: I0130 00:12:30.325124 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7a324941-1096-49b3-a2ef-55df038bf42c-kube-api-access\") pod \"7a324941-1096-49b3-a2ef-55df038bf42c\" (UID: \"7a324941-1096-49b3-a2ef-55df038bf42c\") " Jan 30 00:12:30 crc kubenswrapper[5104]: I0130 00:12:30.325520 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7a324941-1096-49b3-a2ef-55df038bf42c-kubelet-dir\") pod \"7a324941-1096-49b3-a2ef-55df038bf42c\" (UID: \"7a324941-1096-49b3-a2ef-55df038bf42c\") " Jan 30 00:12:30 crc kubenswrapper[5104]: I0130 00:12:30.326436 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a324941-1096-49b3-a2ef-55df038bf42c-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "7a324941-1096-49b3-a2ef-55df038bf42c" (UID: "7a324941-1096-49b3-a2ef-55df038bf42c"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:12:30 crc kubenswrapper[5104]: I0130 00:12:30.330310 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a324941-1096-49b3-a2ef-55df038bf42c-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7a324941-1096-49b3-a2ef-55df038bf42c" (UID: "7a324941-1096-49b3-a2ef-55df038bf42c"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:12:30 crc kubenswrapper[5104]: I0130 00:12:30.398982 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-lhbqs"] Jan 30 00:12:30 crc kubenswrapper[5104]: W0130 00:12:30.414762 5104 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod40d2656d_a61b_4aaa_8860_225ca88ac6a7.slice/crio-2efce453c0d3442309138dc91cc60dc06ec515955ac1e677e2131dfa9aae88f1 WatchSource:0}: Error finding container 2efce453c0d3442309138dc91cc60dc06ec515955ac1e677e2131dfa9aae88f1: Status 404 returned error can't find the container with id 2efce453c0d3442309138dc91cc60dc06ec515955ac1e677e2131dfa9aae88f1 Jan 30 00:12:30 crc kubenswrapper[5104]: I0130 00:12:30.427539 5104 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7a324941-1096-49b3-a2ef-55df038bf42c-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:30 crc kubenswrapper[5104]: I0130 00:12:30.427572 5104 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7a324941-1096-49b3-a2ef-55df038bf42c-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:30 crc kubenswrapper[5104]: I0130 00:12:30.540199 5104 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e9b5059-1b3e-4067-a63d-2952cbe863af" path="/var/lib/kubelet/pods/9e9b5059-1b3e-4067-a63d-2952cbe863af/volumes" Jan 30 00:12:30 crc kubenswrapper[5104]: I0130 00:12:30.656748 5104 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-whc9q"] Jan 30 00:12:30 crc kubenswrapper[5104]: I0130 00:12:30.657525 5104 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7a324941-1096-49b3-a2ef-55df038bf42c" containerName="pruner" Jan 30 00:12:30 crc kubenswrapper[5104]: I0130 
00:12:30.657551 5104 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a324941-1096-49b3-a2ef-55df038bf42c" containerName="pruner" Jan 30 00:12:30 crc kubenswrapper[5104]: I0130 00:12:30.657678 5104 memory_manager.go:356] "RemoveStaleState removing state" podUID="7a324941-1096-49b3-a2ef-55df038bf42c" containerName="pruner" Jan 30 00:12:30 crc kubenswrapper[5104]: I0130 00:12:30.663980 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-whc9q" Jan 30 00:12:30 crc kubenswrapper[5104]: I0130 00:12:30.676175 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Jan 30 00:12:30 crc kubenswrapper[5104]: I0130 00:12:30.678008 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-whc9q"] Jan 30 00:12:30 crc kubenswrapper[5104]: I0130 00:12:30.709245 5104 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-9ddfb9f55-d9wqk" Jan 30 00:12:30 crc kubenswrapper[5104]: I0130 00:12:30.718934 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-9ddfb9f55-d9wqk" Jan 30 00:12:30 crc kubenswrapper[5104]: I0130 00:12:30.730547 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed75038d-3a8a-493b-8fda-d9722d334034-utilities\") pod \"redhat-marketplace-whc9q\" (UID: \"ed75038d-3a8a-493b-8fda-d9722d334034\") " pod="openshift-marketplace/redhat-marketplace-whc9q" Jan 30 00:12:30 crc kubenswrapper[5104]: I0130 00:12:30.730616 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed75038d-3a8a-493b-8fda-d9722d334034-catalog-content\") pod \"redhat-marketplace-whc9q\" (UID: 
\"ed75038d-3a8a-493b-8fda-d9722d334034\") " pod="openshift-marketplace/redhat-marketplace-whc9q" Jan 30 00:12:30 crc kubenswrapper[5104]: I0130 00:12:30.730638 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckvff\" (UniqueName: \"kubernetes.io/projected/ed75038d-3a8a-493b-8fda-d9722d334034-kube-api-access-ckvff\") pod \"redhat-marketplace-whc9q\" (UID: \"ed75038d-3a8a-493b-8fda-d9722d334034\") " pod="openshift-marketplace/redhat-marketplace-whc9q" Jan 30 00:12:30 crc kubenswrapper[5104]: I0130 00:12:30.826947 5104 ???:1] "http: TLS handshake error from 192.168.126.11:45780: no serving certificate available for the kubelet" Jan 30 00:12:30 crc kubenswrapper[5104]: I0130 00:12:30.831783 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed75038d-3a8a-493b-8fda-d9722d334034-utilities\") pod \"redhat-marketplace-whc9q\" (UID: \"ed75038d-3a8a-493b-8fda-d9722d334034\") " pod="openshift-marketplace/redhat-marketplace-whc9q" Jan 30 00:12:30 crc kubenswrapper[5104]: I0130 00:12:30.831926 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed75038d-3a8a-493b-8fda-d9722d334034-catalog-content\") pod \"redhat-marketplace-whc9q\" (UID: \"ed75038d-3a8a-493b-8fda-d9722d334034\") " pod="openshift-marketplace/redhat-marketplace-whc9q" Jan 30 00:12:30 crc kubenswrapper[5104]: I0130 00:12:30.831967 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ckvff\" (UniqueName: \"kubernetes.io/projected/ed75038d-3a8a-493b-8fda-d9722d334034-kube-api-access-ckvff\") pod \"redhat-marketplace-whc9q\" (UID: \"ed75038d-3a8a-493b-8fda-d9722d334034\") " pod="openshift-marketplace/redhat-marketplace-whc9q" Jan 30 00:12:30 crc kubenswrapper[5104]: I0130 00:12:30.832986 5104 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed75038d-3a8a-493b-8fda-d9722d334034-catalog-content\") pod \"redhat-marketplace-whc9q\" (UID: \"ed75038d-3a8a-493b-8fda-d9722d334034\") " pod="openshift-marketplace/redhat-marketplace-whc9q" Jan 30 00:12:30 crc kubenswrapper[5104]: I0130 00:12:30.833276 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed75038d-3a8a-493b-8fda-d9722d334034-utilities\") pod \"redhat-marketplace-whc9q\" (UID: \"ed75038d-3a8a-493b-8fda-d9722d334034\") " pod="openshift-marketplace/redhat-marketplace-whc9q" Jan 30 00:12:30 crc kubenswrapper[5104]: I0130 00:12:30.875940 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckvff\" (UniqueName: \"kubernetes.io/projected/ed75038d-3a8a-493b-8fda-d9722d334034-kube-api-access-ckvff\") pod \"redhat-marketplace-whc9q\" (UID: \"ed75038d-3a8a-493b-8fda-d9722d334034\") " pod="openshift-marketplace/redhat-marketplace-whc9q" Jan 30 00:12:30 crc kubenswrapper[5104]: I0130 00:12:30.954620 5104 generic.go:358] "Generic (PLEG): container finished" podID="43bd3b33-35f9-480e-9425-26cc2318094f" containerID="dfd815d92628999d0dca12041b2162a70f293994d94fec3e7286044f9b66a9c1" exitCode=0 Jan 30 00:12:30 crc kubenswrapper[5104]: I0130 00:12:30.954769 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495520-7xhcr" event={"ID":"43bd3b33-35f9-480e-9425-26cc2318094f","Type":"ContainerDied","Data":"dfd815d92628999d0dca12041b2162a70f293994d94fec3e7286044f9b66a9c1"} Jan 30 00:12:30 crc kubenswrapper[5104]: I0130 00:12:30.956437 5104 generic.go:358] "Generic (PLEG): container finished" podID="92919674-8c7c-46d5-a719-aaf5b45bbc45" containerID="75956d28d1478e1df89a0c5106cd51c3ad1f4dd399ab00aa683ab18b1a5f701f" exitCode=0 Jan 30 00:12:30 crc kubenswrapper[5104]: I0130 00:12:30.956516 5104 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dvdcz" event={"ID":"92919674-8c7c-46d5-a719-aaf5b45bbc45","Type":"ContainerDied","Data":"75956d28d1478e1df89a0c5106cd51c3ad1f4dd399ab00aa683ab18b1a5f701f"} Jan 30 00:12:30 crc kubenswrapper[5104]: I0130 00:12:30.969464 5104 generic.go:358] "Generic (PLEG): container finished" podID="c2b5358a-627a-4a04-8bbf-7865a366375b" containerID="7fb594e6cea1aab0bfc5d1d7da2cf17895207b0d375d4f8e9d329fc9da784a3c" exitCode=0 Jan 30 00:12:30 crc kubenswrapper[5104]: I0130 00:12:30.969633 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dxc6g" event={"ID":"c2b5358a-627a-4a04-8bbf-7865a366375b","Type":"ContainerDied","Data":"7fb594e6cea1aab0bfc5d1d7da2cf17895207b0d375d4f8e9d329fc9da784a3c"} Jan 30 00:12:30 crc kubenswrapper[5104]: I0130 00:12:30.969685 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dxc6g" event={"ID":"c2b5358a-627a-4a04-8bbf-7865a366375b","Type":"ContainerStarted","Data":"5ecd8e558146142479e9b9390cec4545f22e8b5e9c7fe458ec99c2cee7861a83"} Jan 30 00:12:30 crc kubenswrapper[5104]: I0130 00:12:30.974098 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"7a324941-1096-49b3-a2ef-55df038bf42c","Type":"ContainerDied","Data":"305b5f4b27d1d11538e107690c8a4a81220b001c7b66133a1266de5dec2d651d"} Jan 30 00:12:30 crc kubenswrapper[5104]: I0130 00:12:30.974135 5104 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="305b5f4b27d1d11538e107690c8a4a81220b001c7b66133a1266de5dec2d651d" Jan 30 00:12:30 crc kubenswrapper[5104]: I0130 00:12:30.974226 5104 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 30 00:12:30 crc kubenswrapper[5104]: I0130 00:12:30.977934 5104 generic.go:358] "Generic (PLEG): container finished" podID="74720252-7847-489b-a755-3c27d70770f9" containerID="f4e3aeefd3ee13cc24315a90b463f3dd5b37e407ca51a5b7dd6a3e0e2bcd9b21" exitCode=0 Jan 30 00:12:30 crc kubenswrapper[5104]: I0130 00:12:30.978011 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kzfbd" event={"ID":"74720252-7847-489b-a755-3c27d70770f9","Type":"ContainerDied","Data":"f4e3aeefd3ee13cc24315a90b463f3dd5b37e407ca51a5b7dd6a3e0e2bcd9b21"} Jan 30 00:12:30 crc kubenswrapper[5104]: I0130 00:12:30.989825 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" event={"ID":"40d2656d-a61b-4aaa-8860-225ca88ac6a7","Type":"ContainerStarted","Data":"23413498b03e80448a9eab6cf163532a15bffaa68e4449835b274c1e5994a24c"} Jan 30 00:12:30 crc kubenswrapper[5104]: I0130 00:12:30.989876 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" event={"ID":"40d2656d-a61b-4aaa-8860-225ca88ac6a7","Type":"ContainerStarted","Data":"2efce453c0d3442309138dc91cc60dc06ec515955ac1e677e2131dfa9aae88f1"} Jan 30 00:12:30 crc kubenswrapper[5104]: I0130 00:12:30.990169 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:30 crc kubenswrapper[5104]: I0130 00:12:30.990268 5104 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-whc9q" Jan 30 00:12:30 crc kubenswrapper[5104]: I0130 00:12:30.993497 5104 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-xs5zv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 00:12:30 crc kubenswrapper[5104]: [-]has-synced failed: reason withheld Jan 30 00:12:30 crc kubenswrapper[5104]: [+]process-running ok Jan 30 00:12:30 crc kubenswrapper[5104]: healthz check failed Jan 30 00:12:30 crc kubenswrapper[5104]: I0130 00:12:30.993533 5104 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-xs5zv" podUID="5b96d7cb-4106-4adb-baab-92ec201306e2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 00:12:31 crc kubenswrapper[5104]: I0130 00:12:31.063651 5104 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fgbcw"] Jan 30 00:12:31 crc kubenswrapper[5104]: I0130 00:12:31.095312 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fgbcw"] Jan 30 00:12:31 crc kubenswrapper[5104]: I0130 00:12:31.095422 5104 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fgbcw" Jan 30 00:12:31 crc kubenswrapper[5104]: I0130 00:12:31.135600 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a08f83ac-7f15-48c7-a0dd-406cbdb64831-utilities\") pod \"redhat-marketplace-fgbcw\" (UID: \"a08f83ac-7f15-48c7-a0dd-406cbdb64831\") " pod="openshift-marketplace/redhat-marketplace-fgbcw" Jan 30 00:12:31 crc kubenswrapper[5104]: I0130 00:12:31.135974 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a08f83ac-7f15-48c7-a0dd-406cbdb64831-catalog-content\") pod \"redhat-marketplace-fgbcw\" (UID: \"a08f83ac-7f15-48c7-a0dd-406cbdb64831\") " pod="openshift-marketplace/redhat-marketplace-fgbcw" Jan 30 00:12:31 crc kubenswrapper[5104]: I0130 00:12:31.136071 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mntz\" (UniqueName: \"kubernetes.io/projected/a08f83ac-7f15-48c7-a0dd-406cbdb64831-kube-api-access-7mntz\") pod \"redhat-marketplace-fgbcw\" (UID: \"a08f83ac-7f15-48c7-a0dd-406cbdb64831\") " pod="openshift-marketplace/redhat-marketplace-fgbcw" Jan 30 00:12:31 crc kubenswrapper[5104]: I0130 00:12:31.162534 5104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" podStartSLOduration=108.162517028 podStartE2EDuration="1m48.162517028s" podCreationTimestamp="2026-01-30 00:10:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:31.162021674 +0000 UTC m=+131.894360883" watchObservedRunningTime="2026-01-30 00:12:31.162517028 +0000 UTC m=+131.894856247" Jan 30 00:12:31 crc kubenswrapper[5104]: I0130 00:12:31.237075 5104 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a08f83ac-7f15-48c7-a0dd-406cbdb64831-utilities\") pod \"redhat-marketplace-fgbcw\" (UID: \"a08f83ac-7f15-48c7-a0dd-406cbdb64831\") " pod="openshift-marketplace/redhat-marketplace-fgbcw" Jan 30 00:12:31 crc kubenswrapper[5104]: I0130 00:12:31.237163 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a08f83ac-7f15-48c7-a0dd-406cbdb64831-catalog-content\") pod \"redhat-marketplace-fgbcw\" (UID: \"a08f83ac-7f15-48c7-a0dd-406cbdb64831\") " pod="openshift-marketplace/redhat-marketplace-fgbcw" Jan 30 00:12:31 crc kubenswrapper[5104]: I0130 00:12:31.237180 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7mntz\" (UniqueName: \"kubernetes.io/projected/a08f83ac-7f15-48c7-a0dd-406cbdb64831-kube-api-access-7mntz\") pod \"redhat-marketplace-fgbcw\" (UID: \"a08f83ac-7f15-48c7-a0dd-406cbdb64831\") " pod="openshift-marketplace/redhat-marketplace-fgbcw" Jan 30 00:12:31 crc kubenswrapper[5104]: I0130 00:12:31.237935 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a08f83ac-7f15-48c7-a0dd-406cbdb64831-utilities\") pod \"redhat-marketplace-fgbcw\" (UID: \"a08f83ac-7f15-48c7-a0dd-406cbdb64831\") " pod="openshift-marketplace/redhat-marketplace-fgbcw" Jan 30 00:12:31 crc kubenswrapper[5104]: I0130 00:12:31.238182 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a08f83ac-7f15-48c7-a0dd-406cbdb64831-catalog-content\") pod \"redhat-marketplace-fgbcw\" (UID: \"a08f83ac-7f15-48c7-a0dd-406cbdb64831\") " pod="openshift-marketplace/redhat-marketplace-fgbcw" Jan 30 00:12:31 crc kubenswrapper[5104]: I0130 00:12:31.261699 5104 operation_generator.go:615] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-7mntz\" (UniqueName: \"kubernetes.io/projected/a08f83ac-7f15-48c7-a0dd-406cbdb64831-kube-api-access-7mntz\") pod \"redhat-marketplace-fgbcw\" (UID: \"a08f83ac-7f15-48c7-a0dd-406cbdb64831\") " pod="openshift-marketplace/redhat-marketplace-fgbcw" Jan 30 00:12:31 crc kubenswrapper[5104]: I0130 00:12:31.420617 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fgbcw" Jan 30 00:12:31 crc kubenswrapper[5104]: I0130 00:12:31.482034 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-whc9q"] Jan 30 00:12:31 crc kubenswrapper[5104]: W0130 00:12:31.488290 5104 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poded75038d_3a8a_493b_8fda_d9722d334034.slice/crio-c9e6f453e603b7995c75c51722cf529d20219169dafe11c18afa19332a303641 WatchSource:0}: Error finding container c9e6f453e603b7995c75c51722cf529d20219169dafe11c18afa19332a303641: Status 404 returned error can't find the container with id c9e6f453e603b7995c75c51722cf529d20219169dafe11c18afa19332a303641 Jan 30 00:12:31 crc kubenswrapper[5104]: I0130 00:12:31.693318 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fgbcw"] Jan 30 00:12:31 crc kubenswrapper[5104]: W0130 00:12:31.751843 5104 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda08f83ac_7f15_48c7_a0dd_406cbdb64831.slice/crio-be594ac218d66565c48ea4ba6fb03aee942da618c54be5b4e6d5adac551fbf83 WatchSource:0}: Error finding container be594ac218d66565c48ea4ba6fb03aee942da618c54be5b4e6d5adac551fbf83: Status 404 returned error can't find the container with id be594ac218d66565c48ea4ba6fb03aee942da618c54be5b4e6d5adac551fbf83 Jan 30 00:12:31 crc kubenswrapper[5104]: I0130 00:12:31.985815 5104 patch_prober.go:28] interesting 
pod/router-default-68cf44c8b8-xs5zv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 00:12:31 crc kubenswrapper[5104]: [-]has-synced failed: reason withheld Jan 30 00:12:31 crc kubenswrapper[5104]: [+]process-running ok Jan 30 00:12:31 crc kubenswrapper[5104]: healthz check failed Jan 30 00:12:31 crc kubenswrapper[5104]: I0130 00:12:31.986381 5104 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-xs5zv" podUID="5b96d7cb-4106-4adb-baab-92ec201306e2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 00:12:32 crc kubenswrapper[5104]: I0130 00:12:32.004978 5104 generic.go:358] "Generic (PLEG): container finished" podID="ed75038d-3a8a-493b-8fda-d9722d334034" containerID="1fbf7106f1007e27af68a503b5bf181443073dbba570fc868e54b0dabbe1307c" exitCode=0 Jan 30 00:12:32 crc kubenswrapper[5104]: I0130 00:12:32.005124 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-whc9q" event={"ID":"ed75038d-3a8a-493b-8fda-d9722d334034","Type":"ContainerDied","Data":"1fbf7106f1007e27af68a503b5bf181443073dbba570fc868e54b0dabbe1307c"} Jan 30 00:12:32 crc kubenswrapper[5104]: I0130 00:12:32.005200 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-whc9q" event={"ID":"ed75038d-3a8a-493b-8fda-d9722d334034","Type":"ContainerStarted","Data":"c9e6f453e603b7995c75c51722cf529d20219169dafe11c18afa19332a303641"} Jan 30 00:12:32 crc kubenswrapper[5104]: I0130 00:12:32.006480 5104 generic.go:358] "Generic (PLEG): container finished" podID="a08f83ac-7f15-48c7-a0dd-406cbdb64831" containerID="f20009af17ecd20c5497675d6047003490364c6ce43003b17bb44cb34bf0bcd4" exitCode=0 Jan 30 00:12:32 crc kubenswrapper[5104]: I0130 00:12:32.006647 5104 kubelet.go:2569] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-fgbcw" event={"ID":"a08f83ac-7f15-48c7-a0dd-406cbdb64831","Type":"ContainerDied","Data":"f20009af17ecd20c5497675d6047003490364c6ce43003b17bb44cb34bf0bcd4"}
Jan 30 00:12:32 crc kubenswrapper[5104]: I0130 00:12:32.006686 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fgbcw" event={"ID":"a08f83ac-7f15-48c7-a0dd-406cbdb64831","Type":"ContainerStarted","Data":"be594ac218d66565c48ea4ba6fb03aee942da618c54be5b4e6d5adac551fbf83"}
Jan 30 00:12:32 crc kubenswrapper[5104]: I0130 00:12:32.061840 5104 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-f55x5"]
Jan 30 00:12:32 crc kubenswrapper[5104]: I0130 00:12:32.067429 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f55x5"
Jan 30 00:12:32 crc kubenswrapper[5104]: I0130 00:12:32.071018 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-f55x5"]
Jan 30 00:12:32 crc kubenswrapper[5104]: I0130 00:12:32.080615 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\""
Jan 30 00:12:32 crc kubenswrapper[5104]: I0130 00:12:32.154787 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wksv\" (UniqueName: \"kubernetes.io/projected/9d42c1eb-8eda-4e38-a26c-970e32c818bb-kube-api-access-4wksv\") pod \"redhat-operators-f55x5\" (UID: \"9d42c1eb-8eda-4e38-a26c-970e32c818bb\") " pod="openshift-marketplace/redhat-operators-f55x5"
Jan 30 00:12:32 crc kubenswrapper[5104]: I0130 00:12:32.154880 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d42c1eb-8eda-4e38-a26c-970e32c818bb-catalog-content\") pod \"redhat-operators-f55x5\" (UID: \"9d42c1eb-8eda-4e38-a26c-970e32c818bb\") " pod="openshift-marketplace/redhat-operators-f55x5"
Jan 30 00:12:32 crc kubenswrapper[5104]: I0130 00:12:32.154971 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d42c1eb-8eda-4e38-a26c-970e32c818bb-utilities\") pod \"redhat-operators-f55x5\" (UID: \"9d42c1eb-8eda-4e38-a26c-970e32c818bb\") " pod="openshift-marketplace/redhat-operators-f55x5"
Jan 30 00:12:32 crc kubenswrapper[5104]: I0130 00:12:32.257005 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4wksv\" (UniqueName: \"kubernetes.io/projected/9d42c1eb-8eda-4e38-a26c-970e32c818bb-kube-api-access-4wksv\") pod \"redhat-operators-f55x5\" (UID: \"9d42c1eb-8eda-4e38-a26c-970e32c818bb\") " pod="openshift-marketplace/redhat-operators-f55x5"
Jan 30 00:12:32 crc kubenswrapper[5104]: I0130 00:12:32.257090 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d42c1eb-8eda-4e38-a26c-970e32c818bb-catalog-content\") pod \"redhat-operators-f55x5\" (UID: \"9d42c1eb-8eda-4e38-a26c-970e32c818bb\") " pod="openshift-marketplace/redhat-operators-f55x5"
Jan 30 00:12:32 crc kubenswrapper[5104]: I0130 00:12:32.257412 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d42c1eb-8eda-4e38-a26c-970e32c818bb-utilities\") pod \"redhat-operators-f55x5\" (UID: \"9d42c1eb-8eda-4e38-a26c-970e32c818bb\") " pod="openshift-marketplace/redhat-operators-f55x5"
Jan 30 00:12:32 crc kubenswrapper[5104]: I0130 00:12:32.257666 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d42c1eb-8eda-4e38-a26c-970e32c818bb-catalog-content\") pod \"redhat-operators-f55x5\" (UID: \"9d42c1eb-8eda-4e38-a26c-970e32c818bb\") " pod="openshift-marketplace/redhat-operators-f55x5"
Jan 30 00:12:32 crc kubenswrapper[5104]: I0130 00:12:32.257804 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d42c1eb-8eda-4e38-a26c-970e32c818bb-utilities\") pod \"redhat-operators-f55x5\" (UID: \"9d42c1eb-8eda-4e38-a26c-970e32c818bb\") " pod="openshift-marketplace/redhat-operators-f55x5"
Jan 30 00:12:32 crc kubenswrapper[5104]: I0130 00:12:32.280311 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wksv\" (UniqueName: \"kubernetes.io/projected/9d42c1eb-8eda-4e38-a26c-970e32c818bb-kube-api-access-4wksv\") pod \"redhat-operators-f55x5\" (UID: \"9d42c1eb-8eda-4e38-a26c-970e32c818bb\") " pod="openshift-marketplace/redhat-operators-f55x5"
Jan 30 00:12:32 crc kubenswrapper[5104]: I0130 00:12:32.362874 5104 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495520-7xhcr"
Jan 30 00:12:32 crc kubenswrapper[5104]: I0130 00:12:32.411516 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f55x5"
Jan 30 00:12:32 crc kubenswrapper[5104]: I0130 00:12:32.457119 5104 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wsrpg"]
Jan 30 00:12:32 crc kubenswrapper[5104]: I0130 00:12:32.458215 5104 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="43bd3b33-35f9-480e-9425-26cc2318094f" containerName="collect-profiles"
Jan 30 00:12:32 crc kubenswrapper[5104]: I0130 00:12:32.458234 5104 state_mem.go:107] "Deleted CPUSet assignment" podUID="43bd3b33-35f9-480e-9425-26cc2318094f" containerName="collect-profiles"
Jan 30 00:12:32 crc kubenswrapper[5104]: I0130 00:12:32.458334 5104 memory_manager.go:356] "RemoveStaleState removing state" podUID="43bd3b33-35f9-480e-9425-26cc2318094f" containerName="collect-profiles"
Jan 30 00:12:32 crc kubenswrapper[5104]: I0130 00:12:32.460788 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gmp6h\" (UniqueName: \"kubernetes.io/projected/43bd3b33-35f9-480e-9425-26cc2318094f-kube-api-access-gmp6h\") pod \"43bd3b33-35f9-480e-9425-26cc2318094f\" (UID: \"43bd3b33-35f9-480e-9425-26cc2318094f\") "
Jan 30 00:12:32 crc kubenswrapper[5104]: I0130 00:12:32.461062 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/43bd3b33-35f9-480e-9425-26cc2318094f-secret-volume\") pod \"43bd3b33-35f9-480e-9425-26cc2318094f\" (UID: \"43bd3b33-35f9-480e-9425-26cc2318094f\") "
Jan 30 00:12:32 crc kubenswrapper[5104]: I0130 00:12:32.461097 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/43bd3b33-35f9-480e-9425-26cc2318094f-config-volume\") pod \"43bd3b33-35f9-480e-9425-26cc2318094f\" (UID: \"43bd3b33-35f9-480e-9425-26cc2318094f\") "
Jan 30 00:12:32 crc kubenswrapper[5104]: I0130 00:12:32.462012 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43bd3b33-35f9-480e-9425-26cc2318094f-config-volume" (OuterVolumeSpecName: "config-volume") pod "43bd3b33-35f9-480e-9425-26cc2318094f" (UID: "43bd3b33-35f9-480e-9425-26cc2318094f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 30 00:12:32 crc kubenswrapper[5104]: I0130 00:12:32.476725 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43bd3b33-35f9-480e-9425-26cc2318094f-kube-api-access-gmp6h" (OuterVolumeSpecName: "kube-api-access-gmp6h") pod "43bd3b33-35f9-480e-9425-26cc2318094f" (UID: "43bd3b33-35f9-480e-9425-26cc2318094f"). InnerVolumeSpecName "kube-api-access-gmp6h". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 30 00:12:32 crc kubenswrapper[5104]: I0130 00:12:32.484994 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43bd3b33-35f9-480e-9425-26cc2318094f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "43bd3b33-35f9-480e-9425-26cc2318094f" (UID: "43bd3b33-35f9-480e-9425-26cc2318094f"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 30 00:12:32 crc kubenswrapper[5104]: I0130 00:12:32.486119 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wsrpg"]
Jan 30 00:12:32 crc kubenswrapper[5104]: I0130 00:12:32.486304 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wsrpg"
Jan 30 00:12:32 crc kubenswrapper[5104]: I0130 00:12:32.563005 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a05a19c-08be-4e1f-bc16-c3a165ad82d5-catalog-content\") pod \"redhat-operators-wsrpg\" (UID: \"3a05a19c-08be-4e1f-bc16-c3a165ad82d5\") " pod="openshift-marketplace/redhat-operators-wsrpg"
Jan 30 00:12:32 crc kubenswrapper[5104]: I0130 00:12:32.563082 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a05a19c-08be-4e1f-bc16-c3a165ad82d5-utilities\") pod \"redhat-operators-wsrpg\" (UID: \"3a05a19c-08be-4e1f-bc16-c3a165ad82d5\") " pod="openshift-marketplace/redhat-operators-wsrpg"
Jan 30 00:12:32 crc kubenswrapper[5104]: I0130 00:12:32.563190 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vs2dn\" (UniqueName: \"kubernetes.io/projected/3a05a19c-08be-4e1f-bc16-c3a165ad82d5-kube-api-access-vs2dn\") pod \"redhat-operators-wsrpg\" (UID: \"3a05a19c-08be-4e1f-bc16-c3a165ad82d5\") " pod="openshift-marketplace/redhat-operators-wsrpg"
Jan 30 00:12:32 crc kubenswrapper[5104]: I0130 00:12:32.563243 5104 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gmp6h\" (UniqueName: \"kubernetes.io/projected/43bd3b33-35f9-480e-9425-26cc2318094f-kube-api-access-gmp6h\") on node \"crc\" DevicePath \"\""
Jan 30 00:12:32 crc kubenswrapper[5104]: I0130 00:12:32.563257 5104 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/43bd3b33-35f9-480e-9425-26cc2318094f-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 30 00:12:32 crc kubenswrapper[5104]: I0130 00:12:32.563268 5104 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/43bd3b33-35f9-480e-9425-26cc2318094f-config-volume\") on node \"crc\" DevicePath \"\""
Jan 30 00:12:32 crc kubenswrapper[5104]: I0130 00:12:32.666500 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vs2dn\" (UniqueName: \"kubernetes.io/projected/3a05a19c-08be-4e1f-bc16-c3a165ad82d5-kube-api-access-vs2dn\") pod \"redhat-operators-wsrpg\" (UID: \"3a05a19c-08be-4e1f-bc16-c3a165ad82d5\") " pod="openshift-marketplace/redhat-operators-wsrpg"
Jan 30 00:12:32 crc kubenswrapper[5104]: I0130 00:12:32.666566 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a05a19c-08be-4e1f-bc16-c3a165ad82d5-catalog-content\") pod \"redhat-operators-wsrpg\" (UID: \"3a05a19c-08be-4e1f-bc16-c3a165ad82d5\") " pod="openshift-marketplace/redhat-operators-wsrpg"
Jan 30 00:12:32 crc kubenswrapper[5104]: I0130 00:12:32.666612 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a05a19c-08be-4e1f-bc16-c3a165ad82d5-utilities\") pod \"redhat-operators-wsrpg\" (UID: \"3a05a19c-08be-4e1f-bc16-c3a165ad82d5\") " pod="openshift-marketplace/redhat-operators-wsrpg"
Jan 30 00:12:32 crc kubenswrapper[5104]: I0130 00:12:32.667044 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a05a19c-08be-4e1f-bc16-c3a165ad82d5-utilities\") pod \"redhat-operators-wsrpg\" (UID: \"3a05a19c-08be-4e1f-bc16-c3a165ad82d5\") " pod="openshift-marketplace/redhat-operators-wsrpg"
Jan 30 00:12:32 crc kubenswrapper[5104]: I0130 00:12:32.667506 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a05a19c-08be-4e1f-bc16-c3a165ad82d5-catalog-content\") pod \"redhat-operators-wsrpg\" (UID: \"3a05a19c-08be-4e1f-bc16-c3a165ad82d5\") " pod="openshift-marketplace/redhat-operators-wsrpg"
Jan 30 00:12:32 crc kubenswrapper[5104]: I0130 00:12:32.688999 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vs2dn\" (UniqueName: \"kubernetes.io/projected/3a05a19c-08be-4e1f-bc16-c3a165ad82d5-kube-api-access-vs2dn\") pod \"redhat-operators-wsrpg\" (UID: \"3a05a19c-08be-4e1f-bc16-c3a165ad82d5\") " pod="openshift-marketplace/redhat-operators-wsrpg"
Jan 30 00:12:32 crc kubenswrapper[5104]: I0130 00:12:32.803488 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wsrpg"
Jan 30 00:12:32 crc kubenswrapper[5104]: I0130 00:12:32.858237 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-f55x5"]
Jan 30 00:12:32 crc kubenswrapper[5104]: W0130 00:12:32.888933 5104 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d42c1eb_8eda_4e38_a26c_970e32c818bb.slice/crio-c9e23068cd296270d2da6b3fa0c8f2cc09dff0799bd5f17a28d0005f327d5de3 WatchSource:0}: Error finding container c9e23068cd296270d2da6b3fa0c8f2cc09dff0799bd5f17a28d0005f327d5de3: Status 404 returned error can't find the container with id c9e23068cd296270d2da6b3fa0c8f2cc09dff0799bd5f17a28d0005f327d5de3
Jan 30 00:12:32 crc kubenswrapper[5104]: I0130 00:12:32.989071 5104 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-xs5zv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 30 00:12:32 crc kubenswrapper[5104]: [-]has-synced failed: reason withheld
Jan 30 00:12:32 crc kubenswrapper[5104]: [+]process-running ok
Jan 30 00:12:32 crc kubenswrapper[5104]: healthz check failed
Jan 30 00:12:32 crc kubenswrapper[5104]: I0130 00:12:32.989144 5104 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-xs5zv" podUID="5b96d7cb-4106-4adb-baab-92ec201306e2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 30 00:12:33 crc kubenswrapper[5104]: I0130 00:12:33.017959 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495520-7xhcr" event={"ID":"43bd3b33-35f9-480e-9425-26cc2318094f","Type":"ContainerDied","Data":"58e82b253849134ec372f2bfd32ec725e3fce7e4c97db749aaf4cb0204777e40"}
Jan 30 00:12:33 crc kubenswrapper[5104]: I0130 00:12:33.018008 5104 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="58e82b253849134ec372f2bfd32ec725e3fce7e4c97db749aaf4cb0204777e40"
Jan 30 00:12:33 crc kubenswrapper[5104]: I0130 00:12:33.018081 5104 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495520-7xhcr"
Jan 30 00:12:33 crc kubenswrapper[5104]: I0130 00:12:33.020184 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f55x5" event={"ID":"9d42c1eb-8eda-4e38-a26c-970e32c818bb","Type":"ContainerStarted","Data":"c9e23068cd296270d2da6b3fa0c8f2cc09dff0799bd5f17a28d0005f327d5de3"}
Jan 30 00:12:33 crc kubenswrapper[5104]: I0130 00:12:33.078571 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wsrpg"]
Jan 30 00:12:33 crc kubenswrapper[5104]: I0130 00:12:33.431513 5104 ???:1] "http: TLS handshake error from 192.168.126.11:50098: no serving certificate available for the kubelet"
Jan 30 00:12:33 crc kubenswrapper[5104]: I0130 00:12:33.988378 5104 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-xs5zv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 30 00:12:33 crc kubenswrapper[5104]: [-]has-synced failed: reason withheld
Jan 30 00:12:33 crc kubenswrapper[5104]: [+]process-running ok
Jan 30 00:12:33 crc kubenswrapper[5104]: healthz check failed
Jan 30 00:12:33 crc kubenswrapper[5104]: I0130 00:12:33.988437 5104 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-xs5zv" podUID="5b96d7cb-4106-4adb-baab-92ec201306e2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 30 00:12:34 crc kubenswrapper[5104]: I0130 00:12:34.028440 5104 generic.go:358] "Generic (PLEG): container finished" podID="9d42c1eb-8eda-4e38-a26c-970e32c818bb" containerID="ccee0083d880793402fc2d4b5bcc55e2843c85af9029e30ad87c01f3d0cd3e9e" exitCode=0
Jan 30 00:12:34 crc kubenswrapper[5104]: I0130 00:12:34.028690 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f55x5" event={"ID":"9d42c1eb-8eda-4e38-a26c-970e32c818bb","Type":"ContainerDied","Data":"ccee0083d880793402fc2d4b5bcc55e2843c85af9029e30ad87c01f3d0cd3e9e"}
Jan 30 00:12:34 crc kubenswrapper[5104]: I0130 00:12:34.034231 5104 generic.go:358] "Generic (PLEG): container finished" podID="3a05a19c-08be-4e1f-bc16-c3a165ad82d5" containerID="ed19f6e08c1c192d4a0b53f539c81c9ddb443c457064e5eb8e7670702707f03e" exitCode=0
Jan 30 00:12:34 crc kubenswrapper[5104]: I0130 00:12:34.034322 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wsrpg" event={"ID":"3a05a19c-08be-4e1f-bc16-c3a165ad82d5","Type":"ContainerDied","Data":"ed19f6e08c1c192d4a0b53f539c81c9ddb443c457064e5eb8e7670702707f03e"}
Jan 30 00:12:34 crc kubenswrapper[5104]: I0130 00:12:34.034351 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wsrpg" event={"ID":"3a05a19c-08be-4e1f-bc16-c3a165ad82d5","Type":"ContainerStarted","Data":"e95acceba708d04ec20c6844798102a6ead6d5553527669faa0c952eb7185d27"}
Jan 30 00:12:34 crc kubenswrapper[5104]: I0130 00:12:34.160837 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-67c89758df-g4wlb"
Jan 30 00:12:34 crc kubenswrapper[5104]: I0130 00:12:34.214030 5104 patch_prober.go:28] interesting pod/downloads-747b44746d-kzx6r container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: connect: connection refused" start-of-body=
Jan 30 00:12:34 crc kubenswrapper[5104]: I0130 00:12:34.214130 5104 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-kzx6r" podUID="6fd43d75-51fe-42d6-9f2a-adbe6045f25c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: connect: connection refused"
Jan 30 00:12:34 crc kubenswrapper[5104]: I0130 00:12:34.990515 5104 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-xs5zv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 30 00:12:34 crc kubenswrapper[5104]: [-]has-synced failed: reason withheld
Jan 30 00:12:34 crc kubenswrapper[5104]: [+]process-running ok
Jan 30 00:12:34 crc kubenswrapper[5104]: healthz check failed
Jan 30 00:12:34 crc kubenswrapper[5104]: I0130 00:12:34.990569 5104 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-xs5zv" podUID="5b96d7cb-4106-4adb-baab-92ec201306e2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 30 00:12:35 crc kubenswrapper[5104]: I0130 00:12:35.556619 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-pdjtd"
Jan 30 00:12:35 crc kubenswrapper[5104]: E0130 00:12:35.634204 5104 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6d7c87fb3a68f735f4c537d11449d981309ba9768c4e831b86b0eb98080f3b42" cmd=["/bin/bash","-c","test -f /ready/ready"]
Jan 30 00:12:35 crc kubenswrapper[5104]: E0130 00:12:35.635738 5104 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6d7c87fb3a68f735f4c537d11449d981309ba9768c4e831b86b0eb98080f3b42" cmd=["/bin/bash","-c","test -f /ready/ready"]
Jan 30 00:12:35 crc kubenswrapper[5104]: E0130 00:12:35.636805 5104 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6d7c87fb3a68f735f4c537d11449d981309ba9768c4e831b86b0eb98080f3b42" cmd=["/bin/bash","-c","test -f /ready/ready"]
Jan 30 00:12:35 crc kubenswrapper[5104]: E0130 00:12:35.636863 5104 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-2f4tq" podUID="3b2a92e1-d95a-4a3e-a07e-62e5100931bb" containerName="kube-multus-additional-cni-plugins" probeResult="unknown"
Jan 30 00:12:35 crc kubenswrapper[5104]: I0130 00:12:35.839153 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-grfh9"
Jan 30 00:12:35 crc kubenswrapper[5104]: I0130 00:12:35.990778 5104 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-xs5zv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 30 00:12:35 crc kubenswrapper[5104]: [-]has-synced failed: reason withheld
Jan 30 00:12:35 crc kubenswrapper[5104]: [+]process-running ok
Jan 30 00:12:35 crc kubenswrapper[5104]: healthz check failed
Jan 30 00:12:35 crc kubenswrapper[5104]: I0130 00:12:35.990899 5104 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-xs5zv" podUID="5b96d7cb-4106-4adb-baab-92ec201306e2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 30 00:12:36 crc kubenswrapper[5104]: I0130 00:12:36.849984 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-mb4lh"
Jan 30 00:12:36 crc kubenswrapper[5104]: I0130 00:12:36.851075 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-jcrzz"
Jan 30 00:12:36 crc kubenswrapper[5104]: I0130 00:12:36.859143 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-72hww"
Jan 30 00:12:36 crc kubenswrapper[5104]: I0130 00:12:36.992361 5104 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-xs5zv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 30 00:12:36 crc kubenswrapper[5104]: [-]has-synced failed: reason withheld
Jan 30 00:12:36 crc kubenswrapper[5104]: [+]process-running ok
Jan 30 00:12:36 crc kubenswrapper[5104]: healthz check failed
Jan 30 00:12:36 crc kubenswrapper[5104]: I0130 00:12:36.992418 5104 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-xs5zv" podUID="5b96d7cb-4106-4adb-baab-92ec201306e2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 30 00:12:37 crc kubenswrapper[5104]: I0130 00:12:37.692000 5104 patch_prober.go:28] interesting pod/console-64d44f6ddf-m6rzk container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.16:8443/health\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body=
Jan 30 00:12:37 crc kubenswrapper[5104]: I0130 00:12:37.692071 5104 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-m6rzk" podUID="a1f8c00b-3459-4b15-ab8c-52407669c50a" containerName="console" probeResult="failure" output="Get \"https://10.217.0.16:8443/health\": dial tcp 10.217.0.16:8443: connect: connection refused"
Jan 30 00:12:37 crc kubenswrapper[5104]: I0130 00:12:37.986251 5104 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-xs5zv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 30 00:12:37 crc kubenswrapper[5104]: [-]has-synced failed: reason withheld
Jan 30 00:12:37 crc kubenswrapper[5104]: [+]process-running ok
Jan 30 00:12:37 crc kubenswrapper[5104]: healthz check failed
Jan 30 00:12:37 crc kubenswrapper[5104]: I0130 00:12:37.986673 5104 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-xs5zv" podUID="5b96d7cb-4106-4adb-baab-92ec201306e2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 30 00:12:38 crc kubenswrapper[5104]: I0130 00:12:38.580016 5104 ???:1] "http: TLS handshake error from 192.168.126.11:50114: no serving certificate available for the kubelet"
Jan 30 00:12:38 crc kubenswrapper[5104]: I0130 00:12:38.985485 5104 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-68cf44c8b8-xs5zv"
Jan 30 00:12:38 crc kubenswrapper[5104]: I0130 00:12:38.988105 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-68cf44c8b8-xs5zv"
Jan 30 00:12:39 crc kubenswrapper[5104]: I0130 00:12:39.112653 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-dr5dp"
Jan 30 00:12:40 crc kubenswrapper[5104]: I0130 00:12:40.092540 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dvdcz" event={"ID":"92919674-8c7c-46d5-a719-aaf5b45bbc45","Type":"ContainerStarted","Data":"5443472ca1a0103100598fdcb5b298331836652482b8db4dbe994b84880b0dca"}
Jan 30 00:12:40 crc kubenswrapper[5104]: I0130 00:12:40.095365 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dxc6g" event={"ID":"c2b5358a-627a-4a04-8bbf-7865a366375b","Type":"ContainerStarted","Data":"ab7e8c270555542c881d166e46cb9ace9e66b702e5a37d21508282aae3510ebd"}
Jan 30 00:12:40 crc kubenswrapper[5104]: I0130 00:12:40.097751 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kzfbd" event={"ID":"74720252-7847-489b-a755-3c27d70770f9","Type":"ContainerStarted","Data":"7e27c3568ca3c540ebbb04d81dc266d87456856f101216fdf4b8cdb48e005ca9"}
Jan 30 00:12:40 crc kubenswrapper[5104]: I0130 00:12:40.099303 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r6xks" event={"ID":"103981ae-943d-41ab-a2d1-9cafe7669187","Type":"ContainerStarted","Data":"d101219cb437e85fe91d77e1cb245802a5cd12d25b9e806b571029c56d35016e"}
Jan 30 00:12:40 crc kubenswrapper[5104]: I0130 00:12:40.101223 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-whc9q" event={"ID":"ed75038d-3a8a-493b-8fda-d9722d334034","Type":"ContainerStarted","Data":"6b8d86914e04055be196960c6930e6523debce1b37cb048cefeab597c937e4f5"}
Jan 30 00:12:40 crc kubenswrapper[5104]: I0130 00:12:40.104222 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fgbcw" event={"ID":"a08f83ac-7f15-48c7-a0dd-406cbdb64831","Type":"ContainerStarted","Data":"c6334b1af8203664b0f43b5a29e2141ca83d6957c276d1bbea94d154be5d26b4"}
Jan 30 00:12:41 crc kubenswrapper[5104]: I0130 00:12:41.115063 5104 generic.go:358] "Generic (PLEG): container finished" podID="c2b5358a-627a-4a04-8bbf-7865a366375b" containerID="ab7e8c270555542c881d166e46cb9ace9e66b702e5a37d21508282aae3510ebd" exitCode=0
Jan 30 00:12:41 crc kubenswrapper[5104]: I0130 00:12:41.115209 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dxc6g" event={"ID":"c2b5358a-627a-4a04-8bbf-7865a366375b","Type":"ContainerDied","Data":"ab7e8c270555542c881d166e46cb9ace9e66b702e5a37d21508282aae3510ebd"}
Jan 30 00:12:41 crc kubenswrapper[5104]: I0130 00:12:41.119109 5104 generic.go:358] "Generic (PLEG): container finished" podID="74720252-7847-489b-a755-3c27d70770f9" containerID="7e27c3568ca3c540ebbb04d81dc266d87456856f101216fdf4b8cdb48e005ca9" exitCode=0
Jan 30 00:12:41 crc kubenswrapper[5104]: I0130 00:12:41.119200 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kzfbd" event={"ID":"74720252-7847-489b-a755-3c27d70770f9","Type":"ContainerDied","Data":"7e27c3568ca3c540ebbb04d81dc266d87456856f101216fdf4b8cdb48e005ca9"}
Jan 30 00:12:41 crc kubenswrapper[5104]: I0130 00:12:41.121387 5104 generic.go:358] "Generic (PLEG): container finished" podID="103981ae-943d-41ab-a2d1-9cafe7669187" containerID="d101219cb437e85fe91d77e1cb245802a5cd12d25b9e806b571029c56d35016e" exitCode=0
Jan 30 00:12:41 crc kubenswrapper[5104]: I0130 00:12:41.121475 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r6xks" event={"ID":"103981ae-943d-41ab-a2d1-9cafe7669187","Type":"ContainerDied","Data":"d101219cb437e85fe91d77e1cb245802a5cd12d25b9e806b571029c56d35016e"}
Jan 30 00:12:41 crc kubenswrapper[5104]: I0130 00:12:41.124430 5104 generic.go:358] "Generic (PLEG): container finished" podID="ed75038d-3a8a-493b-8fda-d9722d334034" containerID="6b8d86914e04055be196960c6930e6523debce1b37cb048cefeab597c937e4f5" exitCode=0
Jan 30 00:12:41 crc kubenswrapper[5104]: I0130 00:12:41.124544 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-whc9q" event={"ID":"ed75038d-3a8a-493b-8fda-d9722d334034","Type":"ContainerDied","Data":"6b8d86914e04055be196960c6930e6523debce1b37cb048cefeab597c937e4f5"}
Jan 30 00:12:41 crc kubenswrapper[5104]: I0130 00:12:41.128983 5104 generic.go:358] "Generic (PLEG): container finished" podID="a08f83ac-7f15-48c7-a0dd-406cbdb64831" containerID="c6334b1af8203664b0f43b5a29e2141ca83d6957c276d1bbea94d154be5d26b4" exitCode=0
Jan 30 00:12:41 crc kubenswrapper[5104]: I0130 00:12:41.129239 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fgbcw" event={"ID":"a08f83ac-7f15-48c7-a0dd-406cbdb64831","Type":"ContainerDied","Data":"c6334b1af8203664b0f43b5a29e2141ca83d6957c276d1bbea94d154be5d26b4"}
Jan 30 00:12:41 crc kubenswrapper[5104]: I0130 00:12:41.131023 5104 generic.go:358] "Generic (PLEG): container finished" podID="92919674-8c7c-46d5-a719-aaf5b45bbc45" containerID="5443472ca1a0103100598fdcb5b298331836652482b8db4dbe994b84880b0dca" exitCode=0
Jan 30 00:12:41 crc kubenswrapper[5104]: I0130 00:12:41.131092 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dvdcz" event={"ID":"92919674-8c7c-46d5-a719-aaf5b45bbc45","Type":"ContainerDied","Data":"5443472ca1a0103100598fdcb5b298331836652482b8db4dbe994b84880b0dca"}
Jan 30 00:12:42 crc kubenswrapper[5104]: I0130 00:12:42.140504 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dxc6g" event={"ID":"c2b5358a-627a-4a04-8bbf-7865a366375b","Type":"ContainerStarted","Data":"17812a1506a9dde514f513e92f6e298cd209ba975960f87bdde3425aa2358aff"}
Jan 30 00:12:42 crc kubenswrapper[5104]: I0130 00:12:42.144543 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kzfbd" event={"ID":"74720252-7847-489b-a755-3c27d70770f9","Type":"ContainerStarted","Data":"2ad59b4028069a6e44ad6ecdcbb11f91ed66f86ccf034ac19041ca9da07a008a"}
Jan 30 00:12:42 crc kubenswrapper[5104]: I0130 00:12:42.206880 5104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-kzfbd" podStartSLOduration=5.433128869 podStartE2EDuration="14.206816447s" podCreationTimestamp="2026-01-30 00:12:28 +0000 UTC" firstStartedPulling="2026-01-30 00:12:30.979059016 +0000 UTC m=+131.711398245" lastFinishedPulling="2026-01-30 00:12:39.752746584 +0000 UTC m=+140.485085823" observedRunningTime="2026-01-30 00:12:42.198621456 +0000 UTC m=+142.930960725" watchObservedRunningTime="2026-01-30 00:12:42.206816447 +0000 UTC m=+142.939155716"
Jan 30 00:12:43 crc kubenswrapper[5104]: I0130 00:12:43.154125 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r6xks" event={"ID":"103981ae-943d-41ab-a2d1-9cafe7669187","Type":"ContainerStarted","Data":"fbc4871e886fffe25e40556cd82bd1c53ece447e07f72d5836ea4f28c1b2a9c5"}
Jan 30 00:12:43 crc kubenswrapper[5104]: I0130 00:12:43.156986 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-whc9q" event={"ID":"ed75038d-3a8a-493b-8fda-d9722d334034","Type":"ContainerStarted","Data":"387ac68e209a2254eee0bbd3d6e37216dd41d4f89549150df78bf8d81c89993b"}
Jan 30 00:12:43 crc kubenswrapper[5104]: I0130 00:12:43.182343 5104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-r6xks" podStartSLOduration=5.34357398 podStartE2EDuration="15.182324899s" podCreationTimestamp="2026-01-30 00:12:28 +0000 UTC" firstStartedPulling="2026-01-30 00:12:29.893890683 +0000 UTC m=+130.626229902" lastFinishedPulling="2026-01-30 00:12:39.732641602 +0000 UTC m=+140.464980821" observedRunningTime="2026-01-30 00:12:43.180593092 +0000 UTC m=+143.912932351" watchObservedRunningTime="2026-01-30 00:12:43.182324899 +0000 UTC m=+143.914664128"
Jan 30 00:12:43 crc kubenswrapper[5104]: I0130 00:12:43.204324 5104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dxc6g" podStartSLOduration=5.420670616 podStartE2EDuration="14.204291022s" podCreationTimestamp="2026-01-30 00:12:29 +0000 UTC" firstStartedPulling="2026-01-30 00:12:30.970490075 +0000 UTC m=+131.702829294" lastFinishedPulling="2026-01-30 00:12:39.754110471 +0000 UTC m=+140.486449700" observedRunningTime="2026-01-30 00:12:43.201316092 +0000 UTC m=+143.933655341" watchObservedRunningTime="2026-01-30 00:12:43.204291022 +0000 UTC m=+143.936630281"
Jan 30 00:12:44 crc kubenswrapper[5104]: I0130 00:12:44.246889 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-747b44746d-kzx6r"
Jan 30 00:12:44 crc kubenswrapper[5104]: I0130 00:12:44.270872 5104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-whc9q" podStartSLOduration=6.503863668 podStartE2EDuration="14.270810201s" podCreationTimestamp="2026-01-30 00:12:30 +0000 UTC" firstStartedPulling="2026-01-30 00:12:32.007345122 +0000 UTC m=+132.739684341" lastFinishedPulling="2026-01-30 00:12:39.774291655 +0000 UTC m=+140.506630874" observedRunningTime="2026-01-30 00:12:44.18521715 +0000 UTC m=+144.917556369" watchObservedRunningTime="2026-01-30 00:12:44.270810201 +0000 UTC m=+145.003149420"
Jan 30 00:12:45 crc kubenswrapper[5104]: E0130 00:12:45.635532 5104 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6d7c87fb3a68f735f4c537d11449d981309ba9768c4e831b86b0eb98080f3b42" cmd=["/bin/bash","-c","test -f /ready/ready"]
Jan 30 00:12:45 crc kubenswrapper[5104]: E0130 00:12:45.637011 5104 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6d7c87fb3a68f735f4c537d11449d981309ba9768c4e831b86b0eb98080f3b42" cmd=["/bin/bash","-c","test -f /ready/ready"]
Jan 30 00:12:45 crc kubenswrapper[5104]: E0130 00:12:45.638362 5104 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6d7c87fb3a68f735f4c537d11449d981309ba9768c4e831b86b0eb98080f3b42" cmd=["/bin/bash","-c","test -f /ready/ready"]
Jan 30 00:12:45 crc kubenswrapper[5104]: E0130 00:12:45.638397 5104 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-2f4tq" podUID="3b2a92e1-d95a-4a3e-a07e-62e5100931bb" containerName="kube-multus-additional-cni-plugins" probeResult="unknown"
Jan 30 00:12:47 crc kubenswrapper[5104]: I0130 00:12:47.752099 5104 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-64d44f6ddf-m6rzk"
Jan 30 00:12:47 crc kubenswrapper[5104]: I0130 00:12:47.758372 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-64d44f6ddf-m6rzk"
Jan 30 00:12:48 crc kubenswrapper[5104]: I0130 00:12:48.369566 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 30 00:12:48 crc kubenswrapper[5104]: I0130 00:12:48.369979 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 30 00:12:48 crc kubenswrapper[5104]: I0130 00:12:48.371759 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\""
Jan 30 00:12:48 crc kubenswrapper[5104]: I0130 00:12:48.371833 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\""
Jan 30 00:12:48 crc kubenswrapper[5104]: I0130 00:12:48.397105 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 30 00:12:48 crc kubenswrapper[5104]: I0130 00:12:48.477683 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 30 00:12:48 crc kubenswrapper[5104]: I0130 00:12:48.525448 5104 util.go:30]
"No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:12:48 crc kubenswrapper[5104]: I0130 00:12:48.846530 5104 ???:1] "http: TLS handshake error from 192.168.126.11:59134: no serving certificate available for the kubelet" Jan 30 00:12:48 crc kubenswrapper[5104]: W0130 00:12:48.995678 5104 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a9ae5f6_97bd_46ac_bafa_ca1b4452a141.slice/crio-8563987ac154b9a2acf52655b2595b9291a1b122dc05a82f8d2e12464989d7b3 WatchSource:0}: Error finding container 8563987ac154b9a2acf52655b2595b9291a1b122dc05a82f8d2e12464989d7b3: Status 404 returned error can't find the container with id 8563987ac154b9a2acf52655b2595b9291a1b122dc05a82f8d2e12464989d7b3 Jan 30 00:12:49 crc kubenswrapper[5104]: I0130 00:12:49.010480 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-r6xks" Jan 30 00:12:49 crc kubenswrapper[5104]: I0130 00:12:49.010519 5104 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-r6xks" Jan 30 00:12:49 crc kubenswrapper[5104]: I0130 00:12:49.198601 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dvdcz" event={"ID":"92919674-8c7c-46d5-a719-aaf5b45bbc45","Type":"ContainerStarted","Data":"c0b0a5ec773f013b62de5bef50a71f4c93f652d6e08c80600f956a25d85cde4e"} Jan 30 00:12:49 crc kubenswrapper[5104]: I0130 00:12:49.200234 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"8563987ac154b9a2acf52655b2595b9291a1b122dc05a82f8d2e12464989d7b3"} Jan 30 00:12:49 crc kubenswrapper[5104]: I0130 00:12:49.203089 5104 kubelet.go:2569] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fgbcw" event={"ID":"a08f83ac-7f15-48c7-a0dd-406cbdb64831","Type":"ContainerStarted","Data":"b8966f0bb7b4bc37cf0f3d04dd8ddffa3e49af3dc4e0cf6559dada383486faa4"} Jan 30 00:12:49 crc kubenswrapper[5104]: I0130 00:12:49.207169 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-kzfbd" Jan 30 00:12:49 crc kubenswrapper[5104]: I0130 00:12:49.207391 5104 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-kzfbd" Jan 30 00:12:49 crc kubenswrapper[5104]: I0130 00:12:49.227949 5104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dvdcz" podStartSLOduration=11.438715761 podStartE2EDuration="20.227932948s" podCreationTimestamp="2026-01-30 00:12:29 +0000 UTC" firstStartedPulling="2026-01-30 00:12:30.957193165 +0000 UTC m=+131.689532384" lastFinishedPulling="2026-01-30 00:12:39.746410352 +0000 UTC m=+140.478749571" observedRunningTime="2026-01-30 00:12:49.226263492 +0000 UTC m=+149.958602721" watchObservedRunningTime="2026-01-30 00:12:49.227932948 +0000 UTC m=+149.960272187" Jan 30 00:12:49 crc kubenswrapper[5104]: I0130 00:12:49.263065 5104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fgbcw" podStartSLOduration=10.528464617000001 podStartE2EDuration="18.263034146s" podCreationTimestamp="2026-01-30 00:12:31 +0000 UTC" firstStartedPulling="2026-01-30 00:12:32.011938317 +0000 UTC m=+132.744277536" lastFinishedPulling="2026-01-30 00:12:39.746507846 +0000 UTC m=+140.478847065" observedRunningTime="2026-01-30 00:12:49.258731079 +0000 UTC m=+149.991070338" watchObservedRunningTime="2026-01-30 00:12:49.263034146 +0000 UTC m=+149.995373415" Jan 30 00:12:49 crc kubenswrapper[5104]: I0130 00:12:49.392702 5104 kubelet.go:2658] "SyncLoop 
(probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-dvdcz" Jan 30 00:12:49 crc kubenswrapper[5104]: I0130 00:12:49.393071 5104 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dvdcz" Jan 30 00:12:49 crc kubenswrapper[5104]: I0130 00:12:49.593978 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-dxc6g" Jan 30 00:12:49 crc kubenswrapper[5104]: I0130 00:12:49.594280 5104 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dxc6g" Jan 30 00:12:50 crc kubenswrapper[5104]: I0130 00:12:50.134120 5104 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-r6xks" Jan 30 00:12:50 crc kubenswrapper[5104]: I0130 00:12:50.137680 5104 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dxc6g" Jan 30 00:12:50 crc kubenswrapper[5104]: I0130 00:12:50.143211 5104 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-kzfbd" Jan 30 00:12:50 crc kubenswrapper[5104]: I0130 00:12:50.145840 5104 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-dvdcz" Jan 30 00:12:50 crc kubenswrapper[5104]: I0130 00:12:50.210250 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"735224f6883b98c77d2c7a10f0081e5ddaa76a4b83de11b7d4fa6cff32cc1c1d"} Jan 30 00:12:50 crc kubenswrapper[5104]: I0130 00:12:50.265334 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-kzfbd" Jan 30 00:12:50 
crc kubenswrapper[5104]: I0130 00:12:50.276737 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-r6xks" Jan 30 00:12:50 crc kubenswrapper[5104]: I0130 00:12:50.294945 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dxc6g" Jan 30 00:12:50 crc kubenswrapper[5104]: I0130 00:12:50.990813 5104 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-whc9q" Jan 30 00:12:50 crc kubenswrapper[5104]: I0130 00:12:50.990903 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-whc9q" Jan 30 00:12:51 crc kubenswrapper[5104]: I0130 00:12:51.043226 5104 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-whc9q" Jan 30 00:12:51 crc kubenswrapper[5104]: I0130 00:12:51.268715 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-whc9q" Jan 30 00:12:51 crc kubenswrapper[5104]: I0130 00:12:51.420915 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-fgbcw" Jan 30 00:12:51 crc kubenswrapper[5104]: I0130 00:12:51.421226 5104 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fgbcw" Jan 30 00:12:51 crc kubenswrapper[5104]: I0130 00:12:51.465969 5104 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fgbcw" Jan 30 00:12:52 crc kubenswrapper[5104]: I0130 00:12:52.450389 5104 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dxc6g"] Jan 30 00:12:53 crc kubenswrapper[5104]: I0130 00:12:53.033753 5104 kubelet.go:2658] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:12:53 crc kubenswrapper[5104]: I0130 00:12:53.228133 5104 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-dxc6g" podUID="c2b5358a-627a-4a04-8bbf-7865a366375b" containerName="registry-server" containerID="cri-o://17812a1506a9dde514f513e92f6e298cd209ba975960f87bdde3425aa2358aff" gracePeriod=2 Jan 30 00:12:53 crc kubenswrapper[5104]: I0130 00:12:53.229743 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:12:54 crc kubenswrapper[5104]: I0130 00:12:54.235072 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f55x5" event={"ID":"9d42c1eb-8eda-4e38-a26c-970e32c818bb","Type":"ContainerStarted","Data":"5c147ccf8753dd935ae5ede5a71d4e03e4964d69a1ad1c882aa4988d33276978"} Jan 30 00:12:54 crc kubenswrapper[5104]: I0130 00:12:54.236431 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wsrpg" event={"ID":"3a05a19c-08be-4e1f-bc16-c3a165ad82d5","Type":"ContainerStarted","Data":"2d269c0773cadaffb869e8f3a8a0571f0d4042d798adfcd696128c32095b6a79"} Jan 30 00:12:54 crc kubenswrapper[5104]: I0130 00:12:54.237941 5104 generic.go:358] "Generic (PLEG): container finished" podID="c2b5358a-627a-4a04-8bbf-7865a366375b" containerID="17812a1506a9dde514f513e92f6e298cd209ba975960f87bdde3425aa2358aff" exitCode=0 Jan 30 00:12:54 crc kubenswrapper[5104]: I0130 00:12:54.237975 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dxc6g" event={"ID":"c2b5358a-627a-4a04-8bbf-7865a366375b","Type":"ContainerDied","Data":"17812a1506a9dde514f513e92f6e298cd209ba975960f87bdde3425aa2358aff"} Jan 30 00:12:54 crc kubenswrapper[5104]: I0130 00:12:54.986446 5104 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/community-operators-dxc6g" Jan 30 00:12:55 crc kubenswrapper[5104]: I0130 00:12:55.073738 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x69cg\" (UniqueName: \"kubernetes.io/projected/c2b5358a-627a-4a04-8bbf-7865a366375b-kube-api-access-x69cg\") pod \"c2b5358a-627a-4a04-8bbf-7865a366375b\" (UID: \"c2b5358a-627a-4a04-8bbf-7865a366375b\") " Jan 30 00:12:55 crc kubenswrapper[5104]: I0130 00:12:55.073813 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2b5358a-627a-4a04-8bbf-7865a366375b-utilities\") pod \"c2b5358a-627a-4a04-8bbf-7865a366375b\" (UID: \"c2b5358a-627a-4a04-8bbf-7865a366375b\") " Jan 30 00:12:55 crc kubenswrapper[5104]: I0130 00:12:55.073890 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2b5358a-627a-4a04-8bbf-7865a366375b-catalog-content\") pod \"c2b5358a-627a-4a04-8bbf-7865a366375b\" (UID: \"c2b5358a-627a-4a04-8bbf-7865a366375b\") " Jan 30 00:12:55 crc kubenswrapper[5104]: I0130 00:12:55.075179 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2b5358a-627a-4a04-8bbf-7865a366375b-utilities" (OuterVolumeSpecName: "utilities") pod "c2b5358a-627a-4a04-8bbf-7865a366375b" (UID: "c2b5358a-627a-4a04-8bbf-7865a366375b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:12:55 crc kubenswrapper[5104]: I0130 00:12:55.088521 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2b5358a-627a-4a04-8bbf-7865a366375b-kube-api-access-x69cg" (OuterVolumeSpecName: "kube-api-access-x69cg") pod "c2b5358a-627a-4a04-8bbf-7865a366375b" (UID: "c2b5358a-627a-4a04-8bbf-7865a366375b"). InnerVolumeSpecName "kube-api-access-x69cg". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:12:55 crc kubenswrapper[5104]: I0130 00:12:55.129116 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2b5358a-627a-4a04-8bbf-7865a366375b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c2b5358a-627a-4a04-8bbf-7865a366375b" (UID: "c2b5358a-627a-4a04-8bbf-7865a366375b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:12:55 crc kubenswrapper[5104]: I0130 00:12:55.175409 5104 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-x69cg\" (UniqueName: \"kubernetes.io/projected/c2b5358a-627a-4a04-8bbf-7865a366375b-kube-api-access-x69cg\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:55 crc kubenswrapper[5104]: I0130 00:12:55.175446 5104 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2b5358a-627a-4a04-8bbf-7865a366375b-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:55 crc kubenswrapper[5104]: I0130 00:12:55.175458 5104 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2b5358a-627a-4a04-8bbf-7865a366375b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:55 crc kubenswrapper[5104]: I0130 00:12:55.244655 5104 generic.go:358] "Generic (PLEG): container finished" podID="9d42c1eb-8eda-4e38-a26c-970e32c818bb" containerID="5c147ccf8753dd935ae5ede5a71d4e03e4964d69a1ad1c882aa4988d33276978" exitCode=0 Jan 30 00:12:55 crc kubenswrapper[5104]: I0130 00:12:55.244728 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f55x5" event={"ID":"9d42c1eb-8eda-4e38-a26c-970e32c818bb","Type":"ContainerDied","Data":"5c147ccf8753dd935ae5ede5a71d4e03e4964d69a1ad1c882aa4988d33276978"} Jan 30 00:12:55 crc kubenswrapper[5104]: I0130 00:12:55.246474 5104 generic.go:358] "Generic (PLEG): container 
finished" podID="3a05a19c-08be-4e1f-bc16-c3a165ad82d5" containerID="2d269c0773cadaffb869e8f3a8a0571f0d4042d798adfcd696128c32095b6a79" exitCode=0 Jan 30 00:12:55 crc kubenswrapper[5104]: I0130 00:12:55.246608 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wsrpg" event={"ID":"3a05a19c-08be-4e1f-bc16-c3a165ad82d5","Type":"ContainerDied","Data":"2d269c0773cadaffb869e8f3a8a0571f0d4042d798adfcd696128c32095b6a79"} Jan 30 00:12:55 crc kubenswrapper[5104]: I0130 00:12:55.249577 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dxc6g" event={"ID":"c2b5358a-627a-4a04-8bbf-7865a366375b","Type":"ContainerDied","Data":"5ecd8e558146142479e9b9390cec4545f22e8b5e9c7fe458ec99c2cee7861a83"} Jan 30 00:12:55 crc kubenswrapper[5104]: I0130 00:12:55.249630 5104 scope.go:117] "RemoveContainer" containerID="17812a1506a9dde514f513e92f6e298cd209ba975960f87bdde3425aa2358aff" Jan 30 00:12:55 crc kubenswrapper[5104]: I0130 00:12:55.249775 5104 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dxc6g" Jan 30 00:12:55 crc kubenswrapper[5104]: I0130 00:12:55.282871 5104 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dxc6g"] Jan 30 00:12:55 crc kubenswrapper[5104]: I0130 00:12:55.283423 5104 scope.go:117] "RemoveContainer" containerID="ab7e8c270555542c881d166e46cb9ace9e66b702e5a37d21508282aae3510ebd" Jan 30 00:12:55 crc kubenswrapper[5104]: I0130 00:12:55.286647 5104 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-dxc6g"] Jan 30 00:12:55 crc kubenswrapper[5104]: I0130 00:12:55.317170 5104 scope.go:117] "RemoveContainer" containerID="7fb594e6cea1aab0bfc5d1d7da2cf17895207b0d375d4f8e9d329fc9da784a3c" Jan 30 00:12:55 crc kubenswrapper[5104]: E0130 00:12:55.633367 5104 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6d7c87fb3a68f735f4c537d11449d981309ba9768c4e831b86b0eb98080f3b42" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 30 00:12:55 crc kubenswrapper[5104]: E0130 00:12:55.635416 5104 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6d7c87fb3a68f735f4c537d11449d981309ba9768c4e831b86b0eb98080f3b42" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 30 00:12:55 crc kubenswrapper[5104]: E0130 00:12:55.636883 5104 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6d7c87fb3a68f735f4c537d11449d981309ba9768c4e831b86b0eb98080f3b42" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 30 00:12:55 crc kubenswrapper[5104]: E0130 
00:12:55.636931 5104 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-2f4tq" podUID="3b2a92e1-d95a-4a3e-a07e-62e5100931bb" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Jan 30 00:12:56 crc kubenswrapper[5104]: I0130 00:12:56.256898 5104 generic.go:358] "Generic (PLEG): container finished" podID="302f79c1-a693-494c-9a1b-360a59d439f5" containerID="e02dca27cb8af76deb0425b9190bc0043c87a255dfa1c4bd9510ec06dab8b283" exitCode=0 Jan 30 00:12:56 crc kubenswrapper[5104]: I0130 00:12:56.511022 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29495520-4kxpc" event={"ID":"302f79c1-a693-494c-9a1b-360a59d439f5","Type":"ContainerDied","Data":"e02dca27cb8af76deb0425b9190bc0043c87a255dfa1c4bd9510ec06dab8b283"} Jan 30 00:12:56 crc kubenswrapper[5104]: I0130 00:12:56.542090 5104 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2b5358a-627a-4a04-8bbf-7865a366375b" path="/var/lib/kubelet/pods/c2b5358a-627a-4a04-8bbf-7865a366375b/volumes" Jan 30 00:12:56 crc kubenswrapper[5104]: I0130 00:12:56.917798 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-6rcx2" Jan 30 00:12:57 crc kubenswrapper[5104]: I0130 00:12:57.265479 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f55x5" event={"ID":"9d42c1eb-8eda-4e38-a26c-970e32c818bb","Type":"ContainerStarted","Data":"5965ad0cf0097ef931a7347c4d50c3c21ae48620fd5630a10ea26ca2b9aa8474"} Jan 30 00:12:57 crc kubenswrapper[5104]: I0130 00:12:57.499174 5104 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-pruner-29495520-4kxpc" Jan 30 00:12:57 crc kubenswrapper[5104]: I0130 00:12:57.518158 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mglfk\" (UniqueName: \"kubernetes.io/projected/302f79c1-a693-494c-9a1b-360a59d439f5-kube-api-access-mglfk\") pod \"302f79c1-a693-494c-9a1b-360a59d439f5\" (UID: \"302f79c1-a693-494c-9a1b-360a59d439f5\") " Jan 30 00:12:57 crc kubenswrapper[5104]: I0130 00:12:57.518254 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/302f79c1-a693-494c-9a1b-360a59d439f5-serviceca\") pod \"302f79c1-a693-494c-9a1b-360a59d439f5\" (UID: \"302f79c1-a693-494c-9a1b-360a59d439f5\") " Jan 30 00:12:57 crc kubenswrapper[5104]: I0130 00:12:57.519021 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/302f79c1-a693-494c-9a1b-360a59d439f5-serviceca" (OuterVolumeSpecName: "serviceca") pod "302f79c1-a693-494c-9a1b-360a59d439f5" (UID: "302f79c1-a693-494c-9a1b-360a59d439f5"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:12:57 crc kubenswrapper[5104]: I0130 00:12:57.525626 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/302f79c1-a693-494c-9a1b-360a59d439f5-kube-api-access-mglfk" (OuterVolumeSpecName: "kube-api-access-mglfk") pod "302f79c1-a693-494c-9a1b-360a59d439f5" (UID: "302f79c1-a693-494c-9a1b-360a59d439f5"). InnerVolumeSpecName "kube-api-access-mglfk". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:12:57 crc kubenswrapper[5104]: I0130 00:12:57.619237 5104 reconciler_common.go:299] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/302f79c1-a693-494c-9a1b-360a59d439f5-serviceca\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:57 crc kubenswrapper[5104]: I0130 00:12:57.619265 5104 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mglfk\" (UniqueName: \"kubernetes.io/projected/302f79c1-a693-494c-9a1b-360a59d439f5-kube-api-access-mglfk\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:58 crc kubenswrapper[5104]: I0130 00:12:58.206947 5104 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-2f4tq_3b2a92e1-d95a-4a3e-a07e-62e5100931bb/kube-multus-additional-cni-plugins/0.log" Jan 30 00:12:58 crc kubenswrapper[5104]: I0130 00:12:58.207214 5104 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-2f4tq" Jan 30 00:12:58 crc kubenswrapper[5104]: I0130 00:12:58.224525 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/3b2a92e1-d95a-4a3e-a07e-62e5100931bb-cni-sysctl-allowlist\") pod \"3b2a92e1-d95a-4a3e-a07e-62e5100931bb\" (UID: \"3b2a92e1-d95a-4a3e-a07e-62e5100931bb\") " Jan 30 00:12:58 crc kubenswrapper[5104]: I0130 00:12:58.224673 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5mdkn\" (UniqueName: \"kubernetes.io/projected/3b2a92e1-d95a-4a3e-a07e-62e5100931bb-kube-api-access-5mdkn\") pod \"3b2a92e1-d95a-4a3e-a07e-62e5100931bb\" (UID: \"3b2a92e1-d95a-4a3e-a07e-62e5100931bb\") " Jan 30 00:12:58 crc kubenswrapper[5104]: I0130 00:12:58.224875 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/3b2a92e1-d95a-4a3e-a07e-62e5100931bb-tuning-conf-dir\") pod \"3b2a92e1-d95a-4a3e-a07e-62e5100931bb\" (UID: \"3b2a92e1-d95a-4a3e-a07e-62e5100931bb\") " Jan 30 00:12:58 crc kubenswrapper[5104]: I0130 00:12:58.224911 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/3b2a92e1-d95a-4a3e-a07e-62e5100931bb-ready\") pod \"3b2a92e1-d95a-4a3e-a07e-62e5100931bb\" (UID: \"3b2a92e1-d95a-4a3e-a07e-62e5100931bb\") " Jan 30 00:12:58 crc kubenswrapper[5104]: I0130 00:12:58.224975 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b2a92e1-d95a-4a3e-a07e-62e5100931bb-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "3b2a92e1-d95a-4a3e-a07e-62e5100931bb" (UID: "3b2a92e1-d95a-4a3e-a07e-62e5100931bb"). InnerVolumeSpecName "tuning-conf-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:12:58 crc kubenswrapper[5104]: I0130 00:12:58.225243 5104 reconciler_common.go:299] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3b2a92e1-d95a-4a3e-a07e-62e5100931bb-tuning-conf-dir\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:58 crc kubenswrapper[5104]: I0130 00:12:58.225252 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b2a92e1-d95a-4a3e-a07e-62e5100931bb-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "3b2a92e1-d95a-4a3e-a07e-62e5100931bb" (UID: "3b2a92e1-d95a-4a3e-a07e-62e5100931bb"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:12:58 crc kubenswrapper[5104]: I0130 00:12:58.225263 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b2a92e1-d95a-4a3e-a07e-62e5100931bb-ready" (OuterVolumeSpecName: "ready") pod "3b2a92e1-d95a-4a3e-a07e-62e5100931bb" (UID: "3b2a92e1-d95a-4a3e-a07e-62e5100931bb"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:12:58 crc kubenswrapper[5104]: I0130 00:12:58.229828 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b2a92e1-d95a-4a3e-a07e-62e5100931bb-kube-api-access-5mdkn" (OuterVolumeSpecName: "kube-api-access-5mdkn") pod "3b2a92e1-d95a-4a3e-a07e-62e5100931bb" (UID: "3b2a92e1-d95a-4a3e-a07e-62e5100931bb"). InnerVolumeSpecName "kube-api-access-5mdkn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:12:58 crc kubenswrapper[5104]: I0130 00:12:58.272489 5104 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-2f4tq_3b2a92e1-d95a-4a3e-a07e-62e5100931bb/kube-multus-additional-cni-plugins/0.log" Jan 30 00:12:58 crc kubenswrapper[5104]: I0130 00:12:58.272539 5104 generic.go:358] "Generic (PLEG): container finished" podID="3b2a92e1-d95a-4a3e-a07e-62e5100931bb" containerID="6d7c87fb3a68f735f4c537d11449d981309ba9768c4e831b86b0eb98080f3b42" exitCode=137 Jan 30 00:12:58 crc kubenswrapper[5104]: I0130 00:12:58.272669 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-2f4tq" event={"ID":"3b2a92e1-d95a-4a3e-a07e-62e5100931bb","Type":"ContainerDied","Data":"6d7c87fb3a68f735f4c537d11449d981309ba9768c4e831b86b0eb98080f3b42"} Jan 30 00:12:58 crc kubenswrapper[5104]: I0130 00:12:58.272717 5104 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-2f4tq"
Jan 30 00:12:58 crc kubenswrapper[5104]: I0130 00:12:58.272741 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-2f4tq" event={"ID":"3b2a92e1-d95a-4a3e-a07e-62e5100931bb","Type":"ContainerDied","Data":"061781b0b7a45da2281f7da3c5e490d113b2969ea0a87d7867374ded90c21363"}
Jan 30 00:12:58 crc kubenswrapper[5104]: I0130 00:12:58.272774 5104 scope.go:117] "RemoveContainer" containerID="6d7c87fb3a68f735f4c537d11449d981309ba9768c4e831b86b0eb98080f3b42"
Jan 30 00:12:58 crc kubenswrapper[5104]: I0130 00:12:58.274887 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29495520-4kxpc" event={"ID":"302f79c1-a693-494c-9a1b-360a59d439f5","Type":"ContainerDied","Data":"4d9873237ec687e3e157e08b63838f7772090bfac9fb751ba58e8bf6f0053cf0"}
Jan 30 00:12:58 crc kubenswrapper[5104]: I0130 00:12:58.274919 5104 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d9873237ec687e3e157e08b63838f7772090bfac9fb751ba58e8bf6f0053cf0"
Jan 30 00:12:58 crc kubenswrapper[5104]: I0130 00:12:58.274923 5104 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29495520-4kxpc"
Jan 30 00:12:58 crc kubenswrapper[5104]: I0130 00:12:58.277841 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wsrpg" event={"ID":"3a05a19c-08be-4e1f-bc16-c3a165ad82d5","Type":"ContainerStarted","Data":"86d65020d1c2bdaff01b0e07581b82dd69c7f98b0fa67a787ec11e902101ca36"}
Jan 30 00:12:58 crc kubenswrapper[5104]: I0130 00:12:58.304956 5104 scope.go:117] "RemoveContainer" containerID="6d7c87fb3a68f735f4c537d11449d981309ba9768c4e831b86b0eb98080f3b42"
Jan 30 00:12:58 crc kubenswrapper[5104]: E0130 00:12:58.306602 5104 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d7c87fb3a68f735f4c537d11449d981309ba9768c4e831b86b0eb98080f3b42\": container with ID starting with 6d7c87fb3a68f735f4c537d11449d981309ba9768c4e831b86b0eb98080f3b42 not found: ID does not exist" containerID="6d7c87fb3a68f735f4c537d11449d981309ba9768c4e831b86b0eb98080f3b42"
Jan 30 00:12:58 crc kubenswrapper[5104]: I0130 00:12:58.306651 5104 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d7c87fb3a68f735f4c537d11449d981309ba9768c4e831b86b0eb98080f3b42"} err="failed to get container status \"6d7c87fb3a68f735f4c537d11449d981309ba9768c4e831b86b0eb98080f3b42\": rpc error: code = NotFound desc = could not find container \"6d7c87fb3a68f735f4c537d11449d981309ba9768c4e831b86b0eb98080f3b42\": container with ID starting with 6d7c87fb3a68f735f4c537d11449d981309ba9768c4e831b86b0eb98080f3b42 not found: ID does not exist"
Jan 30 00:12:58 crc kubenswrapper[5104]: I0130 00:12:58.316461 5104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-f55x5" podStartSLOduration=9.035781028 podStartE2EDuration="26.316440324s" podCreationTimestamp="2026-01-30 00:12:32 +0000 UTC" firstStartedPulling="2026-01-30 00:12:34.029390654 +0000 UTC m=+134.761729873" lastFinishedPulling="2026-01-30 00:12:51.31004993 +0000 UTC m=+152.042389169" observedRunningTime="2026-01-30 00:12:58.312769416 +0000 UTC m=+159.045108655" watchObservedRunningTime="2026-01-30 00:12:58.316440324 +0000 UTC m=+159.048779553"
Jan 30 00:12:58 crc kubenswrapper[5104]: I0130 00:12:58.326164 5104 reconciler_common.go:299] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/3b2a92e1-d95a-4a3e-a07e-62e5100931bb-ready\") on node \"crc\" DevicePath \"\""
Jan 30 00:12:58 crc kubenswrapper[5104]: I0130 00:12:58.326209 5104 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/3b2a92e1-d95a-4a3e-a07e-62e5100931bb-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\""
Jan 30 00:12:58 crc kubenswrapper[5104]: I0130 00:12:58.326224 5104 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5mdkn\" (UniqueName: \"kubernetes.io/projected/3b2a92e1-d95a-4a3e-a07e-62e5100931bb-kube-api-access-5mdkn\") on node \"crc\" DevicePath \"\""
Jan 30 00:12:58 crc kubenswrapper[5104]: I0130 00:12:58.338936 5104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wsrpg" podStartSLOduration=9.053768874 podStartE2EDuration="26.338912891s" podCreationTimestamp="2026-01-30 00:12:32 +0000 UTC" firstStartedPulling="2026-01-30 00:12:34.035027126 +0000 UTC m=+134.767366345" lastFinishedPulling="2026-01-30 00:12:51.320171153 +0000 UTC m=+152.052510362" observedRunningTime="2026-01-30 00:12:58.335111799 +0000 UTC m=+159.067451048" watchObservedRunningTime="2026-01-30 00:12:58.338912891 +0000 UTC m=+159.071252130"
Jan 30 00:12:58 crc kubenswrapper[5104]: I0130 00:12:58.348415 5104 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-2f4tq"]
Jan 30 00:12:58 crc kubenswrapper[5104]: I0130 00:12:58.351698 5104 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-2f4tq"]
Jan 30 00:12:58 crc kubenswrapper[5104]: I0130 00:12:58.531615 5104 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b2a92e1-d95a-4a3e-a07e-62e5100931bb" path="/var/lib/kubelet/pods/3b2a92e1-d95a-4a3e-a07e-62e5100931bb/volumes"
Jan 30 00:13:01 crc kubenswrapper[5104]: I0130 00:13:01.257915 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dvdcz"
Jan 30 00:13:02 crc kubenswrapper[5104]: I0130 00:13:02.248210 5104 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dvdcz"]
Jan 30 00:13:02 crc kubenswrapper[5104]: I0130 00:13:02.249126 5104 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-dvdcz" podUID="92919674-8c7c-46d5-a719-aaf5b45bbc45" containerName="registry-server" containerID="cri-o://c0b0a5ec773f013b62de5bef50a71f4c93f652d6e08c80600f956a25d85cde4e" gracePeriod=2
Jan 30 00:13:02 crc kubenswrapper[5104]: I0130 00:13:02.412300 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-f55x5"
Jan 30 00:13:02 crc kubenswrapper[5104]: I0130 00:13:02.412526 5104 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-f55x5"
Jan 30 00:13:02 crc kubenswrapper[5104]: I0130 00:13:02.458571 5104 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-f55x5"
Jan 30 00:13:02 crc kubenswrapper[5104]: I0130 00:13:02.616360 5104 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dvdcz"
Jan 30 00:13:02 crc kubenswrapper[5104]: I0130 00:13:02.681826 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92919674-8c7c-46d5-a719-aaf5b45bbc45-catalog-content\") pod \"92919674-8c7c-46d5-a719-aaf5b45bbc45\" (UID: \"92919674-8c7c-46d5-a719-aaf5b45bbc45\") "
Jan 30 00:13:02 crc kubenswrapper[5104]: I0130 00:13:02.681892 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92919674-8c7c-46d5-a719-aaf5b45bbc45-utilities\") pod \"92919674-8c7c-46d5-a719-aaf5b45bbc45\" (UID: \"92919674-8c7c-46d5-a719-aaf5b45bbc45\") "
Jan 30 00:13:02 crc kubenswrapper[5104]: I0130 00:13:02.681927 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdrr6\" (UniqueName: \"kubernetes.io/projected/92919674-8c7c-46d5-a719-aaf5b45bbc45-kube-api-access-qdrr6\") pod \"92919674-8c7c-46d5-a719-aaf5b45bbc45\" (UID: \"92919674-8c7c-46d5-a719-aaf5b45bbc45\") "
Jan 30 00:13:02 crc kubenswrapper[5104]: I0130 00:13:02.683999 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92919674-8c7c-46d5-a719-aaf5b45bbc45-utilities" (OuterVolumeSpecName: "utilities") pod "92919674-8c7c-46d5-a719-aaf5b45bbc45" (UID: "92919674-8c7c-46d5-a719-aaf5b45bbc45"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 30 00:13:02 crc kubenswrapper[5104]: I0130 00:13:02.689436 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92919674-8c7c-46d5-a719-aaf5b45bbc45-kube-api-access-qdrr6" (OuterVolumeSpecName: "kube-api-access-qdrr6") pod "92919674-8c7c-46d5-a719-aaf5b45bbc45" (UID: "92919674-8c7c-46d5-a719-aaf5b45bbc45"). InnerVolumeSpecName "kube-api-access-qdrr6". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 30 00:13:02 crc kubenswrapper[5104]: I0130 00:13:02.710080 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92919674-8c7c-46d5-a719-aaf5b45bbc45-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "92919674-8c7c-46d5-a719-aaf5b45bbc45" (UID: "92919674-8c7c-46d5-a719-aaf5b45bbc45"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 30 00:13:02 crc kubenswrapper[5104]: I0130 00:13:02.783641 5104 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92919674-8c7c-46d5-a719-aaf5b45bbc45-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 00:13:02 crc kubenswrapper[5104]: I0130 00:13:02.783689 5104 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92919674-8c7c-46d5-a719-aaf5b45bbc45-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 00:13:02 crc kubenswrapper[5104]: I0130 00:13:02.783704 5104 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qdrr6\" (UniqueName: \"kubernetes.io/projected/92919674-8c7c-46d5-a719-aaf5b45bbc45-kube-api-access-qdrr6\") on node \"crc\" DevicePath \"\""
Jan 30 00:13:02 crc kubenswrapper[5104]: I0130 00:13:02.804165 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-wsrpg"
Jan 30 00:13:02 crc kubenswrapper[5104]: I0130 00:13:02.804219 5104 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wsrpg"
Jan 30 00:13:03 crc kubenswrapper[5104]: I0130 00:13:03.030195 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fgbcw"
Jan 30 00:13:03 crc kubenswrapper[5104]: I0130 00:13:03.308769 5104 generic.go:358] "Generic (PLEG): container finished" podID="92919674-8c7c-46d5-a719-aaf5b45bbc45" containerID="c0b0a5ec773f013b62de5bef50a71f4c93f652d6e08c80600f956a25d85cde4e" exitCode=0
Jan 30 00:13:03 crc kubenswrapper[5104]: I0130 00:13:03.308935 5104 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dvdcz"
Jan 30 00:13:03 crc kubenswrapper[5104]: I0130 00:13:03.309333 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dvdcz" event={"ID":"92919674-8c7c-46d5-a719-aaf5b45bbc45","Type":"ContainerDied","Data":"c0b0a5ec773f013b62de5bef50a71f4c93f652d6e08c80600f956a25d85cde4e"}
Jan 30 00:13:03 crc kubenswrapper[5104]: I0130 00:13:03.309526 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dvdcz" event={"ID":"92919674-8c7c-46d5-a719-aaf5b45bbc45","Type":"ContainerDied","Data":"6b891c25571f6725bc0ed860a5c10385a3fcb11fc867cadd5b0fe8b890dddb81"}
Jan 30 00:13:03 crc kubenswrapper[5104]: I0130 00:13:03.309673 5104 scope.go:117] "RemoveContainer" containerID="c0b0a5ec773f013b62de5bef50a71f4c93f652d6e08c80600f956a25d85cde4e"
Jan 30 00:13:03 crc kubenswrapper[5104]: I0130 00:13:03.341782 5104 scope.go:117] "RemoveContainer" containerID="5443472ca1a0103100598fdcb5b298331836652482b8db4dbe994b84880b0dca"
Jan 30 00:13:03 crc kubenswrapper[5104]: I0130 00:13:03.363748 5104 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dvdcz"]
Jan 30 00:13:03 crc kubenswrapper[5104]: I0130 00:13:03.369551 5104 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-dvdcz"]
Jan 30 00:13:03 crc kubenswrapper[5104]: I0130 00:13:03.378144 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-f55x5"
Jan 30 00:13:03 crc kubenswrapper[5104]: I0130 00:13:03.384255 5104 scope.go:117] "RemoveContainer" containerID="75956d28d1478e1df89a0c5106cd51c3ad1f4dd399ab00aa683ab18b1a5f701f"
Jan 30 00:13:03 crc kubenswrapper[5104]: I0130 00:13:03.452787 5104 scope.go:117] "RemoveContainer" containerID="c0b0a5ec773f013b62de5bef50a71f4c93f652d6e08c80600f956a25d85cde4e"
Jan 30 00:13:03 crc kubenswrapper[5104]: E0130 00:13:03.453363 5104 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0b0a5ec773f013b62de5bef50a71f4c93f652d6e08c80600f956a25d85cde4e\": container with ID starting with c0b0a5ec773f013b62de5bef50a71f4c93f652d6e08c80600f956a25d85cde4e not found: ID does not exist" containerID="c0b0a5ec773f013b62de5bef50a71f4c93f652d6e08c80600f956a25d85cde4e"
Jan 30 00:13:03 crc kubenswrapper[5104]: I0130 00:13:03.453426 5104 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0b0a5ec773f013b62de5bef50a71f4c93f652d6e08c80600f956a25d85cde4e"} err="failed to get container status \"c0b0a5ec773f013b62de5bef50a71f4c93f652d6e08c80600f956a25d85cde4e\": rpc error: code = NotFound desc = could not find container \"c0b0a5ec773f013b62de5bef50a71f4c93f652d6e08c80600f956a25d85cde4e\": container with ID starting with c0b0a5ec773f013b62de5bef50a71f4c93f652d6e08c80600f956a25d85cde4e not found: ID does not exist"
Jan 30 00:13:03 crc kubenswrapper[5104]: I0130 00:13:03.453471 5104 scope.go:117] "RemoveContainer" containerID="5443472ca1a0103100598fdcb5b298331836652482b8db4dbe994b84880b0dca"
Jan 30 00:13:03 crc kubenswrapper[5104]: E0130 00:13:03.453956 5104 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5443472ca1a0103100598fdcb5b298331836652482b8db4dbe994b84880b0dca\": container with ID starting with 5443472ca1a0103100598fdcb5b298331836652482b8db4dbe994b84880b0dca not found: ID does not exist" containerID="5443472ca1a0103100598fdcb5b298331836652482b8db4dbe994b84880b0dca"
Jan 30 00:13:03 crc kubenswrapper[5104]: I0130 00:13:03.454020 5104 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5443472ca1a0103100598fdcb5b298331836652482b8db4dbe994b84880b0dca"} err="failed to get container status \"5443472ca1a0103100598fdcb5b298331836652482b8db4dbe994b84880b0dca\": rpc error: code = NotFound desc = could not find container \"5443472ca1a0103100598fdcb5b298331836652482b8db4dbe994b84880b0dca\": container with ID starting with 5443472ca1a0103100598fdcb5b298331836652482b8db4dbe994b84880b0dca not found: ID does not exist"
Jan 30 00:13:03 crc kubenswrapper[5104]: I0130 00:13:03.454059 5104 scope.go:117] "RemoveContainer" containerID="75956d28d1478e1df89a0c5106cd51c3ad1f4dd399ab00aa683ab18b1a5f701f"
Jan 30 00:13:03 crc kubenswrapper[5104]: E0130 00:13:03.454426 5104 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75956d28d1478e1df89a0c5106cd51c3ad1f4dd399ab00aa683ab18b1a5f701f\": container with ID starting with 75956d28d1478e1df89a0c5106cd51c3ad1f4dd399ab00aa683ab18b1a5f701f not found: ID does not exist" containerID="75956d28d1478e1df89a0c5106cd51c3ad1f4dd399ab00aa683ab18b1a5f701f"
Jan 30 00:13:03 crc kubenswrapper[5104]: I0130 00:13:03.454483 5104 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75956d28d1478e1df89a0c5106cd51c3ad1f4dd399ab00aa683ab18b1a5f701f"} err="failed to get container status \"75956d28d1478e1df89a0c5106cd51c3ad1f4dd399ab00aa683ab18b1a5f701f\": rpc error: code = NotFound desc = could not find container \"75956d28d1478e1df89a0c5106cd51c3ad1f4dd399ab00aa683ab18b1a5f701f\": container with ID starting with 75956d28d1478e1df89a0c5106cd51c3ad1f4dd399ab00aa683ab18b1a5f701f not found: ID does not exist"
Jan 30 00:13:03 crc kubenswrapper[5104]: I0130 00:13:03.839165 5104 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wsrpg" podUID="3a05a19c-08be-4e1f-bc16-c3a165ad82d5" containerName="registry-server" probeResult="failure" output=<
Jan 30 00:13:03 crc kubenswrapper[5104]: timeout: failed to connect service ":50051" within 1s
Jan 30 00:13:03 crc kubenswrapper[5104]: >
Jan 30 00:13:04 crc kubenswrapper[5104]: I0130 00:13:04.536790 5104 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92919674-8c7c-46d5-a719-aaf5b45bbc45" path="/var/lib/kubelet/pods/92919674-8c7c-46d5-a719-aaf5b45bbc45/volumes"
Jan 30 00:13:04 crc kubenswrapper[5104]: I0130 00:13:04.652808 5104 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fgbcw"]
Jan 30 00:13:04 crc kubenswrapper[5104]: I0130 00:13:04.653276 5104 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fgbcw" podUID="a08f83ac-7f15-48c7-a0dd-406cbdb64831" containerName="registry-server" containerID="cri-o://b8966f0bb7b4bc37cf0f3d04dd8ddffa3e49af3dc4e0cf6559dada383486faa4" gracePeriod=2
Jan 30 00:13:05 crc kubenswrapper[5104]: I0130 00:13:05.018107 5104 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fgbcw"
Jan 30 00:13:05 crc kubenswrapper[5104]: I0130 00:13:05.115380 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7mntz\" (UniqueName: \"kubernetes.io/projected/a08f83ac-7f15-48c7-a0dd-406cbdb64831-kube-api-access-7mntz\") pod \"a08f83ac-7f15-48c7-a0dd-406cbdb64831\" (UID: \"a08f83ac-7f15-48c7-a0dd-406cbdb64831\") "
Jan 30 00:13:05 crc kubenswrapper[5104]: I0130 00:13:05.115429 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a08f83ac-7f15-48c7-a0dd-406cbdb64831-catalog-content\") pod \"a08f83ac-7f15-48c7-a0dd-406cbdb64831\" (UID: \"a08f83ac-7f15-48c7-a0dd-406cbdb64831\") "
Jan 30 00:13:05 crc kubenswrapper[5104]: I0130 00:13:05.115540 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a08f83ac-7f15-48c7-a0dd-406cbdb64831-utilities\") pod \"a08f83ac-7f15-48c7-a0dd-406cbdb64831\" (UID: \"a08f83ac-7f15-48c7-a0dd-406cbdb64831\") "
Jan 30 00:13:05 crc kubenswrapper[5104]: I0130 00:13:05.116650 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a08f83ac-7f15-48c7-a0dd-406cbdb64831-utilities" (OuterVolumeSpecName: "utilities") pod "a08f83ac-7f15-48c7-a0dd-406cbdb64831" (UID: "a08f83ac-7f15-48c7-a0dd-406cbdb64831"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 30 00:13:05 crc kubenswrapper[5104]: I0130 00:13:05.116926 5104 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a08f83ac-7f15-48c7-a0dd-406cbdb64831-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 00:13:05 crc kubenswrapper[5104]: I0130 00:13:05.128016 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a08f83ac-7f15-48c7-a0dd-406cbdb64831-kube-api-access-7mntz" (OuterVolumeSpecName: "kube-api-access-7mntz") pod "a08f83ac-7f15-48c7-a0dd-406cbdb64831" (UID: "a08f83ac-7f15-48c7-a0dd-406cbdb64831"). InnerVolumeSpecName "kube-api-access-7mntz". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 30 00:13:05 crc kubenswrapper[5104]: I0130 00:13:05.128256 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a08f83ac-7f15-48c7-a0dd-406cbdb64831-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a08f83ac-7f15-48c7-a0dd-406cbdb64831" (UID: "a08f83ac-7f15-48c7-a0dd-406cbdb64831"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 30 00:13:05 crc kubenswrapper[5104]: I0130 00:13:05.218465 5104 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7mntz\" (UniqueName: \"kubernetes.io/projected/a08f83ac-7f15-48c7-a0dd-406cbdb64831-kube-api-access-7mntz\") on node \"crc\" DevicePath \"\""
Jan 30 00:13:05 crc kubenswrapper[5104]: I0130 00:13:05.218507 5104 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a08f83ac-7f15-48c7-a0dd-406cbdb64831-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 00:13:05 crc kubenswrapper[5104]: I0130 00:13:05.329966 5104 generic.go:358] "Generic (PLEG): container finished" podID="a08f83ac-7f15-48c7-a0dd-406cbdb64831" containerID="b8966f0bb7b4bc37cf0f3d04dd8ddffa3e49af3dc4e0cf6559dada383486faa4" exitCode=0
Jan 30 00:13:05 crc kubenswrapper[5104]: I0130 00:13:05.330078 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fgbcw" event={"ID":"a08f83ac-7f15-48c7-a0dd-406cbdb64831","Type":"ContainerDied","Data":"b8966f0bb7b4bc37cf0f3d04dd8ddffa3e49af3dc4e0cf6559dada383486faa4"}
Jan 30 00:13:05 crc kubenswrapper[5104]: I0130 00:13:05.330154 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fgbcw" event={"ID":"a08f83ac-7f15-48c7-a0dd-406cbdb64831","Type":"ContainerDied","Data":"be594ac218d66565c48ea4ba6fb03aee942da618c54be5b4e6d5adac551fbf83"}
Jan 30 00:13:05 crc kubenswrapper[5104]: I0130 00:13:05.330180 5104 scope.go:117] "RemoveContainer" containerID="b8966f0bb7b4bc37cf0f3d04dd8ddffa3e49af3dc4e0cf6559dada383486faa4"
Jan 30 00:13:05 crc kubenswrapper[5104]: I0130 00:13:05.330108 5104 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fgbcw"
Jan 30 00:13:05 crc kubenswrapper[5104]: I0130 00:13:05.372270 5104 scope.go:117] "RemoveContainer" containerID="c6334b1af8203664b0f43b5a29e2141ca83d6957c276d1bbea94d154be5d26b4"
Jan 30 00:13:05 crc kubenswrapper[5104]: I0130 00:13:05.374435 5104 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fgbcw"]
Jan 30 00:13:05 crc kubenswrapper[5104]: I0130 00:13:05.380263 5104 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fgbcw"]
Jan 30 00:13:05 crc kubenswrapper[5104]: I0130 00:13:05.388983 5104 scope.go:117] "RemoveContainer" containerID="f20009af17ecd20c5497675d6047003490364c6ce43003b17bb44cb34bf0bcd4"
Jan 30 00:13:05 crc kubenswrapper[5104]: I0130 00:13:05.408137 5104 scope.go:117] "RemoveContainer" containerID="b8966f0bb7b4bc37cf0f3d04dd8ddffa3e49af3dc4e0cf6559dada383486faa4"
Jan 30 00:13:05 crc kubenswrapper[5104]: E0130 00:13:05.408524 5104 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8966f0bb7b4bc37cf0f3d04dd8ddffa3e49af3dc4e0cf6559dada383486faa4\": container with ID starting with b8966f0bb7b4bc37cf0f3d04dd8ddffa3e49af3dc4e0cf6559dada383486faa4 not found: ID does not exist" containerID="b8966f0bb7b4bc37cf0f3d04dd8ddffa3e49af3dc4e0cf6559dada383486faa4"
Jan 30 00:13:05 crc kubenswrapper[5104]: I0130 00:13:05.408554 5104 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8966f0bb7b4bc37cf0f3d04dd8ddffa3e49af3dc4e0cf6559dada383486faa4"} err="failed to get container status \"b8966f0bb7b4bc37cf0f3d04dd8ddffa3e49af3dc4e0cf6559dada383486faa4\": rpc error: code = NotFound desc = could not find container \"b8966f0bb7b4bc37cf0f3d04dd8ddffa3e49af3dc4e0cf6559dada383486faa4\": container with ID starting with b8966f0bb7b4bc37cf0f3d04dd8ddffa3e49af3dc4e0cf6559dada383486faa4 not found: ID does not exist"
Jan 30 00:13:05 crc kubenswrapper[5104]: I0130 00:13:05.408573 5104 scope.go:117] "RemoveContainer" containerID="c6334b1af8203664b0f43b5a29e2141ca83d6957c276d1bbea94d154be5d26b4"
Jan 30 00:13:05 crc kubenswrapper[5104]: E0130 00:13:05.409478 5104 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6334b1af8203664b0f43b5a29e2141ca83d6957c276d1bbea94d154be5d26b4\": container with ID starting with c6334b1af8203664b0f43b5a29e2141ca83d6957c276d1bbea94d154be5d26b4 not found: ID does not exist" containerID="c6334b1af8203664b0f43b5a29e2141ca83d6957c276d1bbea94d154be5d26b4"
Jan 30 00:13:05 crc kubenswrapper[5104]: I0130 00:13:05.409519 5104 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6334b1af8203664b0f43b5a29e2141ca83d6957c276d1bbea94d154be5d26b4"} err="failed to get container status \"c6334b1af8203664b0f43b5a29e2141ca83d6957c276d1bbea94d154be5d26b4\": rpc error: code = NotFound desc = could not find container \"c6334b1af8203664b0f43b5a29e2141ca83d6957c276d1bbea94d154be5d26b4\": container with ID starting with c6334b1af8203664b0f43b5a29e2141ca83d6957c276d1bbea94d154be5d26b4 not found: ID does not exist"
Jan 30 00:13:05 crc kubenswrapper[5104]: I0130 00:13:05.409546 5104 scope.go:117] "RemoveContainer" containerID="f20009af17ecd20c5497675d6047003490364c6ce43003b17bb44cb34bf0bcd4"
Jan 30 00:13:05 crc kubenswrapper[5104]: E0130 00:13:05.409986 5104 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f20009af17ecd20c5497675d6047003490364c6ce43003b17bb44cb34bf0bcd4\": container with ID starting with f20009af17ecd20c5497675d6047003490364c6ce43003b17bb44cb34bf0bcd4 not found: ID does not exist" containerID="f20009af17ecd20c5497675d6047003490364c6ce43003b17bb44cb34bf0bcd4"
Jan 30 00:13:05 crc kubenswrapper[5104]: I0130 00:13:05.410073 5104 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f20009af17ecd20c5497675d6047003490364c6ce43003b17bb44cb34bf0bcd4"} err="failed to get container status \"f20009af17ecd20c5497675d6047003490364c6ce43003b17bb44cb34bf0bcd4\": rpc error: code = NotFound desc = could not find container \"f20009af17ecd20c5497675d6047003490364c6ce43003b17bb44cb34bf0bcd4\": container with ID starting with f20009af17ecd20c5497675d6047003490364c6ce43003b17bb44cb34bf0bcd4 not found: ID does not exist"
Jan 30 00:13:06 crc kubenswrapper[5104]: I0130 00:13:06.533434 5104 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a08f83ac-7f15-48c7-a0dd-406cbdb64831" path="/var/lib/kubelet/pods/a08f83ac-7f15-48c7-a0dd-406cbdb64831/volumes"
Jan 30 00:13:07 crc kubenswrapper[5104]: I0130 00:13:07.702830 5104 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"]
Jan 30 00:13:07 crc kubenswrapper[5104]: I0130 00:13:07.703471 5104 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c2b5358a-627a-4a04-8bbf-7865a366375b" containerName="extract-utilities"
Jan 30 00:13:07 crc kubenswrapper[5104]: I0130 00:13:07.703484 5104 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2b5358a-627a-4a04-8bbf-7865a366375b" containerName="extract-utilities"
Jan 30 00:13:07 crc kubenswrapper[5104]: I0130 00:13:07.703502 5104 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a08f83ac-7f15-48c7-a0dd-406cbdb64831" containerName="registry-server"
Jan 30 00:13:07 crc kubenswrapper[5104]: I0130 00:13:07.703510 5104 state_mem.go:107] "Deleted CPUSet assignment" podUID="a08f83ac-7f15-48c7-a0dd-406cbdb64831" containerName="registry-server"
Jan 30 00:13:07 crc kubenswrapper[5104]: I0130 00:13:07.703521 5104 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c2b5358a-627a-4a04-8bbf-7865a366375b" containerName="registry-server"
Jan 30 00:13:07 crc kubenswrapper[5104]: I0130 00:13:07.703527 5104 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2b5358a-627a-4a04-8bbf-7865a366375b" containerName="registry-server"
Jan 30 00:13:07 crc kubenswrapper[5104]: I0130 00:13:07.703538 5104 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c2b5358a-627a-4a04-8bbf-7865a366375b" containerName="extract-content"
Jan 30 00:13:07 crc kubenswrapper[5104]: I0130 00:13:07.703543 5104 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2b5358a-627a-4a04-8bbf-7865a366375b" containerName="extract-content"
Jan 30 00:13:07 crc kubenswrapper[5104]: I0130 00:13:07.703555 5104 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="92919674-8c7c-46d5-a719-aaf5b45bbc45" containerName="extract-utilities"
Jan 30 00:13:07 crc kubenswrapper[5104]: I0130 00:13:07.703560 5104 state_mem.go:107] "Deleted CPUSet assignment" podUID="92919674-8c7c-46d5-a719-aaf5b45bbc45" containerName="extract-utilities"
Jan 30 00:13:07 crc kubenswrapper[5104]: I0130 00:13:07.703567 5104 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a08f83ac-7f15-48c7-a0dd-406cbdb64831" containerName="extract-utilities"
Jan 30 00:13:07 crc kubenswrapper[5104]: I0130 00:13:07.703572 5104 state_mem.go:107] "Deleted CPUSet assignment" podUID="a08f83ac-7f15-48c7-a0dd-406cbdb64831" containerName="extract-utilities"
Jan 30 00:13:07 crc kubenswrapper[5104]: I0130 00:13:07.703579 5104 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3b2a92e1-d95a-4a3e-a07e-62e5100931bb" containerName="kube-multus-additional-cni-plugins"
Jan 30 00:13:07 crc kubenswrapper[5104]: I0130 00:13:07.703585 5104 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b2a92e1-d95a-4a3e-a07e-62e5100931bb" containerName="kube-multus-additional-cni-plugins"
Jan 30 00:13:07 crc kubenswrapper[5104]: I0130 00:13:07.703593 5104 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a08f83ac-7f15-48c7-a0dd-406cbdb64831" containerName="extract-content"
Jan 30 00:13:07 crc kubenswrapper[5104]: I0130 00:13:07.703598 5104 state_mem.go:107] "Deleted CPUSet assignment" podUID="a08f83ac-7f15-48c7-a0dd-406cbdb64831" containerName="extract-content"
Jan 30 00:13:07 crc kubenswrapper[5104]: I0130 00:13:07.703609 5104 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="92919674-8c7c-46d5-a719-aaf5b45bbc45" containerName="registry-server"
Jan 30 00:13:07 crc kubenswrapper[5104]: I0130 00:13:07.703615 5104 state_mem.go:107] "Deleted CPUSet assignment" podUID="92919674-8c7c-46d5-a719-aaf5b45bbc45" containerName="registry-server"
Jan 30 00:13:07 crc kubenswrapper[5104]: I0130 00:13:07.703626 5104 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="92919674-8c7c-46d5-a719-aaf5b45bbc45" containerName="extract-content"
Jan 30 00:13:07 crc kubenswrapper[5104]: I0130 00:13:07.703631 5104 state_mem.go:107] "Deleted CPUSet assignment" podUID="92919674-8c7c-46d5-a719-aaf5b45bbc45" containerName="extract-content"
Jan 30 00:13:07 crc kubenswrapper[5104]: I0130 00:13:07.703639 5104 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="302f79c1-a693-494c-9a1b-360a59d439f5" containerName="image-pruner"
Jan 30 00:13:07 crc kubenswrapper[5104]: I0130 00:13:07.703644 5104 state_mem.go:107] "Deleted CPUSet assignment" podUID="302f79c1-a693-494c-9a1b-360a59d439f5" containerName="image-pruner"
Jan 30 00:13:07 crc kubenswrapper[5104]: I0130 00:13:07.703733 5104 memory_manager.go:356] "RemoveStaleState removing state" podUID="92919674-8c7c-46d5-a719-aaf5b45bbc45" containerName="registry-server"
Jan 30 00:13:07 crc kubenswrapper[5104]: I0130 00:13:07.703745 5104 memory_manager.go:356] "RemoveStaleState removing state" podUID="3b2a92e1-d95a-4a3e-a07e-62e5100931bb" containerName="kube-multus-additional-cni-plugins"
Jan 30 00:13:07 crc kubenswrapper[5104]: I0130 00:13:07.703752 5104 memory_manager.go:356] "RemoveStaleState removing state" podUID="c2b5358a-627a-4a04-8bbf-7865a366375b" containerName="registry-server"
Jan 30 00:13:07 crc kubenswrapper[5104]: I0130 00:13:07.703760 5104 memory_manager.go:356] "RemoveStaleState removing state" podUID="a08f83ac-7f15-48c7-a0dd-406cbdb64831" containerName="registry-server"
Jan 30 00:13:07 crc kubenswrapper[5104]: I0130 00:13:07.703768 5104 memory_manager.go:356] "RemoveStaleState removing state" podUID="302f79c1-a693-494c-9a1b-360a59d439f5" containerName="image-pruner"
Jan 30 00:13:07 crc kubenswrapper[5104]: I0130 00:13:07.737225 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"]
Jan 30 00:13:07 crc kubenswrapper[5104]: I0130 00:13:07.737398 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc"
Jan 30 00:13:07 crc kubenswrapper[5104]: I0130 00:13:07.741601 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\""
Jan 30 00:13:07 crc kubenswrapper[5104]: I0130 00:13:07.741911 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\""
Jan 30 00:13:07 crc kubenswrapper[5104]: I0130 00:13:07.847108 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7f4c64b0-a1bd-49f2-8072-31234c587328-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"7f4c64b0-a1bd-49f2-8072-31234c587328\") " pod="openshift-kube-apiserver/revision-pruner-12-crc"
Jan 30 00:13:07 crc kubenswrapper[5104]: I0130 00:13:07.847491 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7f4c64b0-a1bd-49f2-8072-31234c587328-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"7f4c64b0-a1bd-49f2-8072-31234c587328\") " pod="openshift-kube-apiserver/revision-pruner-12-crc"
Jan 30 00:13:07 crc kubenswrapper[5104]: I0130 00:13:07.949102 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7f4c64b0-a1bd-49f2-8072-31234c587328-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"7f4c64b0-a1bd-49f2-8072-31234c587328\") " pod="openshift-kube-apiserver/revision-pruner-12-crc"
Jan 30 00:13:07 crc kubenswrapper[5104]: I0130 00:13:07.949179 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7f4c64b0-a1bd-49f2-8072-31234c587328-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"7f4c64b0-a1bd-49f2-8072-31234c587328\") " pod="openshift-kube-apiserver/revision-pruner-12-crc"
Jan 30 00:13:07 crc kubenswrapper[5104]: I0130 00:13:07.949537 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7f4c64b0-a1bd-49f2-8072-31234c587328-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"7f4c64b0-a1bd-49f2-8072-31234c587328\") " pod="openshift-kube-apiserver/revision-pruner-12-crc"
Jan 30 00:13:07 crc kubenswrapper[5104]: I0130 00:13:07.972891 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7f4c64b0-a1bd-49f2-8072-31234c587328-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"7f4c64b0-a1bd-49f2-8072-31234c587328\") " pod="openshift-kube-apiserver/revision-pruner-12-crc"
Jan 30 00:13:08 crc kubenswrapper[5104]: I0130 00:13:08.058589 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc"
Jan 30 00:13:08 crc kubenswrapper[5104]: I0130 00:13:08.307711 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"]
Jan 30 00:13:08 crc kubenswrapper[5104]: W0130 00:13:08.322963 5104 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod7f4c64b0_a1bd_49f2_8072_31234c587328.slice/crio-2936e0dd43318560fece10779aa012b4e4377ab6859200a5ce2bdcda4945ac29 WatchSource:0}: Error finding container 2936e0dd43318560fece10779aa012b4e4377ab6859200a5ce2bdcda4945ac29: Status 404 returned error can't find the container with id 2936e0dd43318560fece10779aa012b4e4377ab6859200a5ce2bdcda4945ac29
Jan 30 00:13:08 crc kubenswrapper[5104]: I0130 00:13:08.346812 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"7f4c64b0-a1bd-49f2-8072-31234c587328","Type":"ContainerStarted","Data":"2936e0dd43318560fece10779aa012b4e4377ab6859200a5ce2bdcda4945ac29"}
Jan 30 00:13:09 crc kubenswrapper[5104]: I0130 00:13:09.347655 5104 ???:1] "http: TLS handshake error from 192.168.126.11:34492: no serving certificate available for the kubelet"
Jan 30 00:13:09 crc kubenswrapper[5104]: I0130 00:13:09.353952 5104 generic.go:358] "Generic (PLEG): container finished" podID="7f4c64b0-a1bd-49f2-8072-31234c587328" containerID="c288b1175732068300cabe7bdef70e48c6c887ac730a431f71696673e53fea4e" exitCode=0
Jan 30 00:13:09 crc kubenswrapper[5104]: I0130 00:13:09.354060 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"7f4c64b0-a1bd-49f2-8072-31234c587328","Type":"ContainerDied","Data":"c288b1175732068300cabe7bdef70e48c6c887ac730a431f71696673e53fea4e"}
Jan 30 00:13:10 crc kubenswrapper[5104]: I0130 00:13:10.615117 5104 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc"
Jan 30 00:13:10 crc kubenswrapper[5104]: I0130 00:13:10.688422 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7f4c64b0-a1bd-49f2-8072-31234c587328-kube-api-access\") pod \"7f4c64b0-a1bd-49f2-8072-31234c587328\" (UID: \"7f4c64b0-a1bd-49f2-8072-31234c587328\") "
Jan 30 00:13:10 crc kubenswrapper[5104]: I0130 00:13:10.688537 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7f4c64b0-a1bd-49f2-8072-31234c587328-kubelet-dir\") pod \"7f4c64b0-a1bd-49f2-8072-31234c587328\" (UID: \"7f4c64b0-a1bd-49f2-8072-31234c587328\") "
Jan 30 00:13:10 crc kubenswrapper[5104]: I0130 00:13:10.688671 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f4c64b0-a1bd-49f2-8072-31234c587328-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "7f4c64b0-a1bd-49f2-8072-31234c587328" (UID: "7f4c64b0-a1bd-49f2-8072-31234c587328"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 30 00:13:10 crc kubenswrapper[5104]: I0130 00:13:10.688796 5104 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7f4c64b0-a1bd-49f2-8072-31234c587328-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 30 00:13:10 crc kubenswrapper[5104]: I0130 00:13:10.695041 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f4c64b0-a1bd-49f2-8072-31234c587328-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7f4c64b0-a1bd-49f2-8072-31234c587328" (UID: "7f4c64b0-a1bd-49f2-8072-31234c587328"). InnerVolumeSpecName "kube-api-access".
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:13:10 crc kubenswrapper[5104]: I0130 00:13:10.790054 5104 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7f4c64b0-a1bd-49f2-8072-31234c587328-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:11 crc kubenswrapper[5104]: I0130 00:13:11.364825 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"7f4c64b0-a1bd-49f2-8072-31234c587328","Type":"ContainerDied","Data":"2936e0dd43318560fece10779aa012b4e4377ab6859200a5ce2bdcda4945ac29"} Jan 30 00:13:11 crc kubenswrapper[5104]: I0130 00:13:11.365216 5104 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2936e0dd43318560fece10779aa012b4e4377ab6859200a5ce2bdcda4945ac29" Jan 30 00:13:11 crc kubenswrapper[5104]: I0130 00:13:11.364868 5104 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 30 00:13:12 crc kubenswrapper[5104]: I0130 00:13:12.286943 5104 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Jan 30 00:13:12 crc kubenswrapper[5104]: I0130 00:13:12.289599 5104 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7f4c64b0-a1bd-49f2-8072-31234c587328" containerName="pruner" Jan 30 00:13:12 crc kubenswrapper[5104]: I0130 00:13:12.289622 5104 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f4c64b0-a1bd-49f2-8072-31234c587328" containerName="pruner" Jan 30 00:13:12 crc kubenswrapper[5104]: I0130 00:13:12.289756 5104 memory_manager.go:356] "RemoveStaleState removing state" podUID="7f4c64b0-a1bd-49f2-8072-31234c587328" containerName="pruner" Jan 30 00:13:12 crc kubenswrapper[5104]: I0130 00:13:12.322264 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Jan 30 00:13:12 
crc kubenswrapper[5104]: I0130 00:13:12.322471 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Jan 30 00:13:12 crc kubenswrapper[5104]: I0130 00:13:12.325987 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\"" Jan 30 00:13:12 crc kubenswrapper[5104]: I0130 00:13:12.326023 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\"" Jan 30 00:13:12 crc kubenswrapper[5104]: I0130 00:13:12.412836 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/60001511-f1e7-4c9e-9c1c-812709496c6c-kube-api-access\") pod \"installer-12-crc\" (UID: \"60001511-f1e7-4c9e-9c1c-812709496c6c\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 30 00:13:12 crc kubenswrapper[5104]: I0130 00:13:12.413215 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/60001511-f1e7-4c9e-9c1c-812709496c6c-var-lock\") pod \"installer-12-crc\" (UID: \"60001511-f1e7-4c9e-9c1c-812709496c6c\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 30 00:13:12 crc kubenswrapper[5104]: I0130 00:13:12.413364 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/60001511-f1e7-4c9e-9c1c-812709496c6c-kubelet-dir\") pod \"installer-12-crc\" (UID: \"60001511-f1e7-4c9e-9c1c-812709496c6c\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 30 00:13:12 crc kubenswrapper[5104]: I0130 00:13:12.514663 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/60001511-f1e7-4c9e-9c1c-812709496c6c-kubelet-dir\") pod \"installer-12-crc\" (UID: \"60001511-f1e7-4c9e-9c1c-812709496c6c\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 30 00:13:12 crc kubenswrapper[5104]: I0130 00:13:12.515022 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/60001511-f1e7-4c9e-9c1c-812709496c6c-kube-api-access\") pod \"installer-12-crc\" (UID: \"60001511-f1e7-4c9e-9c1c-812709496c6c\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 30 00:13:12 crc kubenswrapper[5104]: I0130 00:13:12.514825 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/60001511-f1e7-4c9e-9c1c-812709496c6c-kubelet-dir\") pod \"installer-12-crc\" (UID: \"60001511-f1e7-4c9e-9c1c-812709496c6c\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 30 00:13:12 crc kubenswrapper[5104]: I0130 00:13:12.515128 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/60001511-f1e7-4c9e-9c1c-812709496c6c-var-lock\") pod \"installer-12-crc\" (UID: \"60001511-f1e7-4c9e-9c1c-812709496c6c\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 30 00:13:12 crc kubenswrapper[5104]: I0130 00:13:12.515279 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/60001511-f1e7-4c9e-9c1c-812709496c6c-var-lock\") pod \"installer-12-crc\" (UID: \"60001511-f1e7-4c9e-9c1c-812709496c6c\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 30 00:13:12 crc kubenswrapper[5104]: I0130 00:13:12.548287 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/60001511-f1e7-4c9e-9c1c-812709496c6c-kube-api-access\") pod \"installer-12-crc\" (UID: 
\"60001511-f1e7-4c9e-9c1c-812709496c6c\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 30 00:13:12 crc kubenswrapper[5104]: I0130 00:13:12.654549 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Jan 30 00:13:12 crc kubenswrapper[5104]: I0130 00:13:12.840705 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Jan 30 00:13:12 crc kubenswrapper[5104]: W0130 00:13:12.849283 5104 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod60001511_f1e7_4c9e_9c1c_812709496c6c.slice/crio-118b8c4eded6c99581ca2467bab0fb89863f6f80a8cf311dab20e36219281aab WatchSource:0}: Error finding container 118b8c4eded6c99581ca2467bab0fb89863f6f80a8cf311dab20e36219281aab: Status 404 returned error can't find the container with id 118b8c4eded6c99581ca2467bab0fb89863f6f80a8cf311dab20e36219281aab Jan 30 00:13:12 crc kubenswrapper[5104]: I0130 00:13:12.862215 5104 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wsrpg" Jan 30 00:13:12 crc kubenswrapper[5104]: I0130 00:13:12.904158 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wsrpg" Jan 30 00:13:13 crc kubenswrapper[5104]: I0130 00:13:13.091904 5104 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wsrpg"] Jan 30 00:13:13 crc kubenswrapper[5104]: I0130 00:13:13.383967 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"60001511-f1e7-4c9e-9c1c-812709496c6c","Type":"ContainerStarted","Data":"69d5cb6cb4ac809645a02f0ecbb666f5d1b674c1adf1fa4600484a216213d523"} Jan 30 00:13:13 crc kubenswrapper[5104]: I0130 00:13:13.384233 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" 
event={"ID":"60001511-f1e7-4c9e-9c1c-812709496c6c","Type":"ContainerStarted","Data":"118b8c4eded6c99581ca2467bab0fb89863f6f80a8cf311dab20e36219281aab"} Jan 30 00:13:14 crc kubenswrapper[5104]: I0130 00:13:14.390698 5104 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-wsrpg" podUID="3a05a19c-08be-4e1f-bc16-c3a165ad82d5" containerName="registry-server" containerID="cri-o://86d65020d1c2bdaff01b0e07581b82dd69c7f98b0fa67a787ec11e902101ca36" gracePeriod=2 Jan 30 00:13:14 crc kubenswrapper[5104]: I0130 00:13:14.716627 5104 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wsrpg" Jan 30 00:13:14 crc kubenswrapper[5104]: I0130 00:13:14.732705 5104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-12-crc" podStartSLOduration=2.732686139 podStartE2EDuration="2.732686139s" podCreationTimestamp="2026-01-30 00:13:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:13:13.402897934 +0000 UTC m=+174.135237153" watchObservedRunningTime="2026-01-30 00:13:14.732686139 +0000 UTC m=+175.465025358" Jan 30 00:13:14 crc kubenswrapper[5104]: I0130 00:13:14.881206 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vs2dn\" (UniqueName: \"kubernetes.io/projected/3a05a19c-08be-4e1f-bc16-c3a165ad82d5-kube-api-access-vs2dn\") pod \"3a05a19c-08be-4e1f-bc16-c3a165ad82d5\" (UID: \"3a05a19c-08be-4e1f-bc16-c3a165ad82d5\") " Jan 30 00:13:14 crc kubenswrapper[5104]: I0130 00:13:14.881278 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a05a19c-08be-4e1f-bc16-c3a165ad82d5-catalog-content\") pod \"3a05a19c-08be-4e1f-bc16-c3a165ad82d5\" (UID: 
\"3a05a19c-08be-4e1f-bc16-c3a165ad82d5\") " Jan 30 00:13:14 crc kubenswrapper[5104]: I0130 00:13:14.881310 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a05a19c-08be-4e1f-bc16-c3a165ad82d5-utilities\") pod \"3a05a19c-08be-4e1f-bc16-c3a165ad82d5\" (UID: \"3a05a19c-08be-4e1f-bc16-c3a165ad82d5\") " Jan 30 00:13:14 crc kubenswrapper[5104]: I0130 00:13:14.882488 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a05a19c-08be-4e1f-bc16-c3a165ad82d5-utilities" (OuterVolumeSpecName: "utilities") pod "3a05a19c-08be-4e1f-bc16-c3a165ad82d5" (UID: "3a05a19c-08be-4e1f-bc16-c3a165ad82d5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:13:14 crc kubenswrapper[5104]: I0130 00:13:14.887111 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a05a19c-08be-4e1f-bc16-c3a165ad82d5-kube-api-access-vs2dn" (OuterVolumeSpecName: "kube-api-access-vs2dn") pod "3a05a19c-08be-4e1f-bc16-c3a165ad82d5" (UID: "3a05a19c-08be-4e1f-bc16-c3a165ad82d5"). InnerVolumeSpecName "kube-api-access-vs2dn". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:13:14 crc kubenswrapper[5104]: I0130 00:13:14.984661 5104 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vs2dn\" (UniqueName: \"kubernetes.io/projected/3a05a19c-08be-4e1f-bc16-c3a165ad82d5-kube-api-access-vs2dn\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:14 crc kubenswrapper[5104]: I0130 00:13:14.984699 5104 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a05a19c-08be-4e1f-bc16-c3a165ad82d5-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:14 crc kubenswrapper[5104]: I0130 00:13:14.995072 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a05a19c-08be-4e1f-bc16-c3a165ad82d5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3a05a19c-08be-4e1f-bc16-c3a165ad82d5" (UID: "3a05a19c-08be-4e1f-bc16-c3a165ad82d5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:13:15 crc kubenswrapper[5104]: I0130 00:13:15.085910 5104 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a05a19c-08be-4e1f-bc16-c3a165ad82d5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:15 crc kubenswrapper[5104]: I0130 00:13:15.398350 5104 generic.go:358] "Generic (PLEG): container finished" podID="3a05a19c-08be-4e1f-bc16-c3a165ad82d5" containerID="86d65020d1c2bdaff01b0e07581b82dd69c7f98b0fa67a787ec11e902101ca36" exitCode=0 Jan 30 00:13:15 crc kubenswrapper[5104]: I0130 00:13:15.398405 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wsrpg" event={"ID":"3a05a19c-08be-4e1f-bc16-c3a165ad82d5","Type":"ContainerDied","Data":"86d65020d1c2bdaff01b0e07581b82dd69c7f98b0fa67a787ec11e902101ca36"} Jan 30 00:13:15 crc kubenswrapper[5104]: I0130 00:13:15.398509 5104 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wsrpg" Jan 30 00:13:15 crc kubenswrapper[5104]: I0130 00:13:15.398810 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wsrpg" event={"ID":"3a05a19c-08be-4e1f-bc16-c3a165ad82d5","Type":"ContainerDied","Data":"e95acceba708d04ec20c6844798102a6ead6d5553527669faa0c952eb7185d27"} Jan 30 00:13:15 crc kubenswrapper[5104]: I0130 00:13:15.398836 5104 scope.go:117] "RemoveContainer" containerID="86d65020d1c2bdaff01b0e07581b82dd69c7f98b0fa67a787ec11e902101ca36" Jan 30 00:13:15 crc kubenswrapper[5104]: I0130 00:13:15.416196 5104 scope.go:117] "RemoveContainer" containerID="2d269c0773cadaffb869e8f3a8a0571f0d4042d798adfcd696128c32095b6a79" Jan 30 00:13:15 crc kubenswrapper[5104]: I0130 00:13:15.428262 5104 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wsrpg"] Jan 30 00:13:15 crc kubenswrapper[5104]: I0130 00:13:15.430585 5104 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-wsrpg"] Jan 30 00:13:15 crc kubenswrapper[5104]: I0130 00:13:15.436057 5104 scope.go:117] "RemoveContainer" containerID="ed19f6e08c1c192d4a0b53f539c81c9ddb443c457064e5eb8e7670702707f03e" Jan 30 00:13:15 crc kubenswrapper[5104]: I0130 00:13:15.453774 5104 scope.go:117] "RemoveContainer" containerID="86d65020d1c2bdaff01b0e07581b82dd69c7f98b0fa67a787ec11e902101ca36" Jan 30 00:13:15 crc kubenswrapper[5104]: E0130 00:13:15.454282 5104 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86d65020d1c2bdaff01b0e07581b82dd69c7f98b0fa67a787ec11e902101ca36\": container with ID starting with 86d65020d1c2bdaff01b0e07581b82dd69c7f98b0fa67a787ec11e902101ca36 not found: ID does not exist" containerID="86d65020d1c2bdaff01b0e07581b82dd69c7f98b0fa67a787ec11e902101ca36" Jan 30 00:13:15 crc kubenswrapper[5104]: I0130 00:13:15.454324 5104 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86d65020d1c2bdaff01b0e07581b82dd69c7f98b0fa67a787ec11e902101ca36"} err="failed to get container status \"86d65020d1c2bdaff01b0e07581b82dd69c7f98b0fa67a787ec11e902101ca36\": rpc error: code = NotFound desc = could not find container \"86d65020d1c2bdaff01b0e07581b82dd69c7f98b0fa67a787ec11e902101ca36\": container with ID starting with 86d65020d1c2bdaff01b0e07581b82dd69c7f98b0fa67a787ec11e902101ca36 not found: ID does not exist" Jan 30 00:13:15 crc kubenswrapper[5104]: I0130 00:13:15.454346 5104 scope.go:117] "RemoveContainer" containerID="2d269c0773cadaffb869e8f3a8a0571f0d4042d798adfcd696128c32095b6a79" Jan 30 00:13:15 crc kubenswrapper[5104]: E0130 00:13:15.454669 5104 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d269c0773cadaffb869e8f3a8a0571f0d4042d798adfcd696128c32095b6a79\": container with ID starting with 2d269c0773cadaffb869e8f3a8a0571f0d4042d798adfcd696128c32095b6a79 not found: ID does not exist" containerID="2d269c0773cadaffb869e8f3a8a0571f0d4042d798adfcd696128c32095b6a79" Jan 30 00:13:15 crc kubenswrapper[5104]: I0130 00:13:15.454690 5104 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d269c0773cadaffb869e8f3a8a0571f0d4042d798adfcd696128c32095b6a79"} err="failed to get container status \"2d269c0773cadaffb869e8f3a8a0571f0d4042d798adfcd696128c32095b6a79\": rpc error: code = NotFound desc = could not find container \"2d269c0773cadaffb869e8f3a8a0571f0d4042d798adfcd696128c32095b6a79\": container with ID starting with 2d269c0773cadaffb869e8f3a8a0571f0d4042d798adfcd696128c32095b6a79 not found: ID does not exist" Jan 30 00:13:15 crc kubenswrapper[5104]: I0130 00:13:15.454703 5104 scope.go:117] "RemoveContainer" containerID="ed19f6e08c1c192d4a0b53f539c81c9ddb443c457064e5eb8e7670702707f03e" Jan 30 00:13:15 crc kubenswrapper[5104]: E0130 
00:13:15.455016 5104 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed19f6e08c1c192d4a0b53f539c81c9ddb443c457064e5eb8e7670702707f03e\": container with ID starting with ed19f6e08c1c192d4a0b53f539c81c9ddb443c457064e5eb8e7670702707f03e not found: ID does not exist" containerID="ed19f6e08c1c192d4a0b53f539c81c9ddb443c457064e5eb8e7670702707f03e" Jan 30 00:13:15 crc kubenswrapper[5104]: I0130 00:13:15.455070 5104 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed19f6e08c1c192d4a0b53f539c81c9ddb443c457064e5eb8e7670702707f03e"} err="failed to get container status \"ed19f6e08c1c192d4a0b53f539c81c9ddb443c457064e5eb8e7670702707f03e\": rpc error: code = NotFound desc = could not find container \"ed19f6e08c1c192d4a0b53f539c81c9ddb443c457064e5eb8e7670702707f03e\": container with ID starting with ed19f6e08c1c192d4a0b53f539c81c9ddb443c457064e5eb8e7670702707f03e not found: ID does not exist" Jan 30 00:13:16 crc kubenswrapper[5104]: I0130 00:13:16.536767 5104 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a05a19c-08be-4e1f-bc16-c3a165ad82d5" path="/var/lib/kubelet/pods/3a05a19c-08be-4e1f-bc16-c3a165ad82d5/volumes" Jan 30 00:13:24 crc kubenswrapper[5104]: I0130 00:13:24.320090 5104 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-g766x"] Jan 30 00:13:49 crc kubenswrapper[5104]: I0130 00:13:49.349329 5104 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-66458b6674-g766x" podUID="c47b4509-0bb1-4360-9db3-29ebfcd734e3" containerName="oauth-openshift" containerID="cri-o://0ab9b2bb77fcaead421f25524b00b1e84579a0a28da49dbf7861e4ab78eb4ada" gracePeriod=15 Jan 30 00:13:49 crc kubenswrapper[5104]: I0130 00:13:49.607401 5104 generic.go:358] "Generic (PLEG): container finished" podID="c47b4509-0bb1-4360-9db3-29ebfcd734e3" 
containerID="0ab9b2bb77fcaead421f25524b00b1e84579a0a28da49dbf7861e4ab78eb4ada" exitCode=0 Jan 30 00:13:49 crc kubenswrapper[5104]: I0130 00:13:49.607515 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-g766x" event={"ID":"c47b4509-0bb1-4360-9db3-29ebfcd734e3","Type":"ContainerDied","Data":"0ab9b2bb77fcaead421f25524b00b1e84579a0a28da49dbf7861e4ab78eb4ada"} Jan 30 00:13:49 crc kubenswrapper[5104]: I0130 00:13:49.788736 5104 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-g766x" Jan 30 00:13:49 crc kubenswrapper[5104]: I0130 00:13:49.826655 5104 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-6969b58588-z5d6p"] Jan 30 00:13:49 crc kubenswrapper[5104]: I0130 00:13:49.827524 5104 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a05a19c-08be-4e1f-bc16-c3a165ad82d5" containerName="extract-utilities" Jan 30 00:13:49 crc kubenswrapper[5104]: I0130 00:13:49.827556 5104 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a05a19c-08be-4e1f-bc16-c3a165ad82d5" containerName="extract-utilities" Jan 30 00:13:49 crc kubenswrapper[5104]: I0130 00:13:49.827578 5104 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a05a19c-08be-4e1f-bc16-c3a165ad82d5" containerName="extract-content" Jan 30 00:13:49 crc kubenswrapper[5104]: I0130 00:13:49.827589 5104 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a05a19c-08be-4e1f-bc16-c3a165ad82d5" containerName="extract-content" Jan 30 00:13:49 crc kubenswrapper[5104]: I0130 00:13:49.827623 5104 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c47b4509-0bb1-4360-9db3-29ebfcd734e3" containerName="oauth-openshift" Jan 30 00:13:49 crc kubenswrapper[5104]: I0130 00:13:49.827634 5104 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="c47b4509-0bb1-4360-9db3-29ebfcd734e3" containerName="oauth-openshift" Jan 30 00:13:49 crc kubenswrapper[5104]: I0130 00:13:49.827649 5104 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a05a19c-08be-4e1f-bc16-c3a165ad82d5" containerName="registry-server" Jan 30 00:13:49 crc kubenswrapper[5104]: I0130 00:13:49.827658 5104 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a05a19c-08be-4e1f-bc16-c3a165ad82d5" containerName="registry-server" Jan 30 00:13:49 crc kubenswrapper[5104]: I0130 00:13:49.827829 5104 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a05a19c-08be-4e1f-bc16-c3a165ad82d5" containerName="registry-server" Jan 30 00:13:49 crc kubenswrapper[5104]: I0130 00:13:49.828206 5104 memory_manager.go:356] "RemoveStaleState removing state" podUID="c47b4509-0bb1-4360-9db3-29ebfcd734e3" containerName="oauth-openshift" Jan 30 00:13:49 crc kubenswrapper[5104]: I0130 00:13:49.832343 5104 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-6969b58588-z5d6p" Jan 30 00:13:49 crc kubenswrapper[5104]: I0130 00:13:49.847286 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6969b58588-z5d6p"] Jan 30 00:13:49 crc kubenswrapper[5104]: I0130 00:13:49.906465 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c47b4509-0bb1-4360-9db3-29ebfcd734e3-v4-0-config-user-template-provider-selection\") pod \"c47b4509-0bb1-4360-9db3-29ebfcd734e3\" (UID: \"c47b4509-0bb1-4360-9db3-29ebfcd734e3\") " Jan 30 00:13:49 crc kubenswrapper[5104]: I0130 00:13:49.906530 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c47b4509-0bb1-4360-9db3-29ebfcd734e3-v4-0-config-system-service-ca\") pod \"c47b4509-0bb1-4360-9db3-29ebfcd734e3\" (UID: \"c47b4509-0bb1-4360-9db3-29ebfcd734e3\") " Jan 30 00:13:49 crc kubenswrapper[5104]: I0130 00:13:49.906555 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c47b4509-0bb1-4360-9db3-29ebfcd734e3-v4-0-config-system-cliconfig\") pod \"c47b4509-0bb1-4360-9db3-29ebfcd734e3\" (UID: \"c47b4509-0bb1-4360-9db3-29ebfcd734e3\") " Jan 30 00:13:49 crc kubenswrapper[5104]: I0130 00:13:49.906613 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c47b4509-0bb1-4360-9db3-29ebfcd734e3-v4-0-config-system-session\") pod \"c47b4509-0bb1-4360-9db3-29ebfcd734e3\" (UID: \"c47b4509-0bb1-4360-9db3-29ebfcd734e3\") " Jan 30 00:13:49 crc kubenswrapper[5104]: I0130 00:13:49.906645 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wnt95\" 
(UniqueName: \"kubernetes.io/projected/c47b4509-0bb1-4360-9db3-29ebfcd734e3-kube-api-access-wnt95\") pod \"c47b4509-0bb1-4360-9db3-29ebfcd734e3\" (UID: \"c47b4509-0bb1-4360-9db3-29ebfcd734e3\") " Jan 30 00:13:49 crc kubenswrapper[5104]: I0130 00:13:49.906672 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c47b4509-0bb1-4360-9db3-29ebfcd734e3-v4-0-config-system-trusted-ca-bundle\") pod \"c47b4509-0bb1-4360-9db3-29ebfcd734e3\" (UID: \"c47b4509-0bb1-4360-9db3-29ebfcd734e3\") " Jan 30 00:13:49 crc kubenswrapper[5104]: I0130 00:13:49.906738 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c47b4509-0bb1-4360-9db3-29ebfcd734e3-v4-0-config-user-template-error\") pod \"c47b4509-0bb1-4360-9db3-29ebfcd734e3\" (UID: \"c47b4509-0bb1-4360-9db3-29ebfcd734e3\") " Jan 30 00:13:49 crc kubenswrapper[5104]: I0130 00:13:49.906771 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c47b4509-0bb1-4360-9db3-29ebfcd734e3-v4-0-config-system-ocp-branding-template\") pod \"c47b4509-0bb1-4360-9db3-29ebfcd734e3\" (UID: \"c47b4509-0bb1-4360-9db3-29ebfcd734e3\") " Jan 30 00:13:49 crc kubenswrapper[5104]: I0130 00:13:49.906799 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c47b4509-0bb1-4360-9db3-29ebfcd734e3-audit-policies\") pod \"c47b4509-0bb1-4360-9db3-29ebfcd734e3\" (UID: \"c47b4509-0bb1-4360-9db3-29ebfcd734e3\") " Jan 30 00:13:49 crc kubenswrapper[5104]: I0130 00:13:49.906873 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/c47b4509-0bb1-4360-9db3-29ebfcd734e3-v4-0-config-system-serving-cert\") pod \"c47b4509-0bb1-4360-9db3-29ebfcd734e3\" (UID: \"c47b4509-0bb1-4360-9db3-29ebfcd734e3\") " Jan 30 00:13:49 crc kubenswrapper[5104]: I0130 00:13:49.906905 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c47b4509-0bb1-4360-9db3-29ebfcd734e3-v4-0-config-system-router-certs\") pod \"c47b4509-0bb1-4360-9db3-29ebfcd734e3\" (UID: \"c47b4509-0bb1-4360-9db3-29ebfcd734e3\") " Jan 30 00:13:49 crc kubenswrapper[5104]: I0130 00:13:49.906965 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c47b4509-0bb1-4360-9db3-29ebfcd734e3-v4-0-config-user-idp-0-file-data\") pod \"c47b4509-0bb1-4360-9db3-29ebfcd734e3\" (UID: \"c47b4509-0bb1-4360-9db3-29ebfcd734e3\") " Jan 30 00:13:49 crc kubenswrapper[5104]: I0130 00:13:49.906990 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c47b4509-0bb1-4360-9db3-29ebfcd734e3-audit-dir\") pod \"c47b4509-0bb1-4360-9db3-29ebfcd734e3\" (UID: \"c47b4509-0bb1-4360-9db3-29ebfcd734e3\") " Jan 30 00:13:49 crc kubenswrapper[5104]: I0130 00:13:49.907025 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c47b4509-0bb1-4360-9db3-29ebfcd734e3-v4-0-config-user-template-login\") pod \"c47b4509-0bb1-4360-9db3-29ebfcd734e3\" (UID: \"c47b4509-0bb1-4360-9db3-29ebfcd734e3\") " Jan 30 00:13:49 crc kubenswrapper[5104]: I0130 00:13:49.907181 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9c7b3597-a966-4e45-9e80-dac6ae4a49eb-v4-0-config-system-serving-cert\") 
pod \"oauth-openshift-6969b58588-z5d6p\" (UID: \"9c7b3597-a966-4e45-9e80-dac6ae4a49eb\") " pod="openshift-authentication/oauth-openshift-6969b58588-z5d6p" Jan 30 00:13:49 crc kubenswrapper[5104]: I0130 00:13:49.907213 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9c7b3597-a966-4e45-9e80-dac6ae4a49eb-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6969b58588-z5d6p\" (UID: \"9c7b3597-a966-4e45-9e80-dac6ae4a49eb\") " pod="openshift-authentication/oauth-openshift-6969b58588-z5d6p" Jan 30 00:13:49 crc kubenswrapper[5104]: I0130 00:13:49.907243 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9c7b3597-a966-4e45-9e80-dac6ae4a49eb-v4-0-config-system-service-ca\") pod \"oauth-openshift-6969b58588-z5d6p\" (UID: \"9c7b3597-a966-4e45-9e80-dac6ae4a49eb\") " pod="openshift-authentication/oauth-openshift-6969b58588-z5d6p" Jan 30 00:13:49 crc kubenswrapper[5104]: I0130 00:13:49.907292 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9c7b3597-a966-4e45-9e80-dac6ae4a49eb-audit-policies\") pod \"oauth-openshift-6969b58588-z5d6p\" (UID: \"9c7b3597-a966-4e45-9e80-dac6ae4a49eb\") " pod="openshift-authentication/oauth-openshift-6969b58588-z5d6p" Jan 30 00:13:49 crc kubenswrapper[5104]: I0130 00:13:49.907326 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c7b3597-a966-4e45-9e80-dac6ae4a49eb-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6969b58588-z5d6p\" (UID: \"9c7b3597-a966-4e45-9e80-dac6ae4a49eb\") " 
pod="openshift-authentication/oauth-openshift-6969b58588-z5d6p" Jan 30 00:13:49 crc kubenswrapper[5104]: I0130 00:13:49.907361 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9c7b3597-a966-4e45-9e80-dac6ae4a49eb-v4-0-config-user-template-login\") pod \"oauth-openshift-6969b58588-z5d6p\" (UID: \"9c7b3597-a966-4e45-9e80-dac6ae4a49eb\") " pod="openshift-authentication/oauth-openshift-6969b58588-z5d6p" Jan 30 00:13:49 crc kubenswrapper[5104]: I0130 00:13:49.907383 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9c7b3597-a966-4e45-9e80-dac6ae4a49eb-audit-dir\") pod \"oauth-openshift-6969b58588-z5d6p\" (UID: \"9c7b3597-a966-4e45-9e80-dac6ae4a49eb\") " pod="openshift-authentication/oauth-openshift-6969b58588-z5d6p" Jan 30 00:13:49 crc kubenswrapper[5104]: I0130 00:13:49.907407 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9c7b3597-a966-4e45-9e80-dac6ae4a49eb-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6969b58588-z5d6p\" (UID: \"9c7b3597-a966-4e45-9e80-dac6ae4a49eb\") " pod="openshift-authentication/oauth-openshift-6969b58588-z5d6p" Jan 30 00:13:49 crc kubenswrapper[5104]: I0130 00:13:49.907426 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9c7b3597-a966-4e45-9e80-dac6ae4a49eb-v4-0-config-user-template-error\") pod \"oauth-openshift-6969b58588-z5d6p\" (UID: \"9c7b3597-a966-4e45-9e80-dac6ae4a49eb\") " pod="openshift-authentication/oauth-openshift-6969b58588-z5d6p" Jan 30 00:13:49 crc kubenswrapper[5104]: I0130 00:13:49.907450 5104 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9c7b3597-a966-4e45-9e80-dac6ae4a49eb-v4-0-config-system-router-certs\") pod \"oauth-openshift-6969b58588-z5d6p\" (UID: \"9c7b3597-a966-4e45-9e80-dac6ae4a49eb\") " pod="openshift-authentication/oauth-openshift-6969b58588-z5d6p" Jan 30 00:13:49 crc kubenswrapper[5104]: I0130 00:13:49.907470 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mhrs\" (UniqueName: \"kubernetes.io/projected/9c7b3597-a966-4e45-9e80-dac6ae4a49eb-kube-api-access-7mhrs\") pod \"oauth-openshift-6969b58588-z5d6p\" (UID: \"9c7b3597-a966-4e45-9e80-dac6ae4a49eb\") " pod="openshift-authentication/oauth-openshift-6969b58588-z5d6p" Jan 30 00:13:49 crc kubenswrapper[5104]: I0130 00:13:49.907503 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9c7b3597-a966-4e45-9e80-dac6ae4a49eb-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6969b58588-z5d6p\" (UID: \"9c7b3597-a966-4e45-9e80-dac6ae4a49eb\") " pod="openshift-authentication/oauth-openshift-6969b58588-z5d6p" Jan 30 00:13:49 crc kubenswrapper[5104]: I0130 00:13:49.907502 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c47b4509-0bb1-4360-9db3-29ebfcd734e3-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "c47b4509-0bb1-4360-9db3-29ebfcd734e3" (UID: "c47b4509-0bb1-4360-9db3-29ebfcd734e3"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:13:49 crc kubenswrapper[5104]: I0130 00:13:49.907523 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9c7b3597-a966-4e45-9e80-dac6ae4a49eb-v4-0-config-system-session\") pod \"oauth-openshift-6969b58588-z5d6p\" (UID: \"9c7b3597-a966-4e45-9e80-dac6ae4a49eb\") " pod="openshift-authentication/oauth-openshift-6969b58588-z5d6p" Jan 30 00:13:49 crc kubenswrapper[5104]: I0130 00:13:49.907553 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c47b4509-0bb1-4360-9db3-29ebfcd734e3-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "c47b4509-0bb1-4360-9db3-29ebfcd734e3" (UID: "c47b4509-0bb1-4360-9db3-29ebfcd734e3"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:13:49 crc kubenswrapper[5104]: I0130 00:13:49.907618 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9c7b3597-a966-4e45-9e80-dac6ae4a49eb-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6969b58588-z5d6p\" (UID: \"9c7b3597-a966-4e45-9e80-dac6ae4a49eb\") " pod="openshift-authentication/oauth-openshift-6969b58588-z5d6p" Jan 30 00:13:49 crc kubenswrapper[5104]: I0130 00:13:49.907659 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c47b4509-0bb1-4360-9db3-29ebfcd734e3-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "c47b4509-0bb1-4360-9db3-29ebfcd734e3" (UID: "c47b4509-0bb1-4360-9db3-29ebfcd734e3"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:13:49 crc kubenswrapper[5104]: I0130 00:13:49.907672 5104 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c47b4509-0bb1-4360-9db3-29ebfcd734e3-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:49 crc kubenswrapper[5104]: I0130 00:13:49.907703 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c47b4509-0bb1-4360-9db3-29ebfcd734e3-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "c47b4509-0bb1-4360-9db3-29ebfcd734e3" (UID: "c47b4509-0bb1-4360-9db3-29ebfcd734e3"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:13:49 crc kubenswrapper[5104]: I0130 00:13:49.907729 5104 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c47b4509-0bb1-4360-9db3-29ebfcd734e3-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:49 crc kubenswrapper[5104]: I0130 00:13:49.908336 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c47b4509-0bb1-4360-9db3-29ebfcd734e3-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "c47b4509-0bb1-4360-9db3-29ebfcd734e3" (UID: "c47b4509-0bb1-4360-9db3-29ebfcd734e3"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:13:49 crc kubenswrapper[5104]: I0130 00:13:49.913191 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c47b4509-0bb1-4360-9db3-29ebfcd734e3-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "c47b4509-0bb1-4360-9db3-29ebfcd734e3" (UID: "c47b4509-0bb1-4360-9db3-29ebfcd734e3"). 
InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:13:49 crc kubenswrapper[5104]: I0130 00:13:49.913191 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c47b4509-0bb1-4360-9db3-29ebfcd734e3-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "c47b4509-0bb1-4360-9db3-29ebfcd734e3" (UID: "c47b4509-0bb1-4360-9db3-29ebfcd734e3"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:13:49 crc kubenswrapper[5104]: I0130 00:13:49.913454 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c47b4509-0bb1-4360-9db3-29ebfcd734e3-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "c47b4509-0bb1-4360-9db3-29ebfcd734e3" (UID: "c47b4509-0bb1-4360-9db3-29ebfcd734e3"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:13:49 crc kubenswrapper[5104]: I0130 00:13:49.913768 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c47b4509-0bb1-4360-9db3-29ebfcd734e3-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "c47b4509-0bb1-4360-9db3-29ebfcd734e3" (UID: "c47b4509-0bb1-4360-9db3-29ebfcd734e3"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:13:49 crc kubenswrapper[5104]: I0130 00:13:49.916008 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c47b4509-0bb1-4360-9db3-29ebfcd734e3-kube-api-access-wnt95" (OuterVolumeSpecName: "kube-api-access-wnt95") pod "c47b4509-0bb1-4360-9db3-29ebfcd734e3" (UID: "c47b4509-0bb1-4360-9db3-29ebfcd734e3"). InnerVolumeSpecName "kube-api-access-wnt95". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:13:49 crc kubenswrapper[5104]: I0130 00:13:49.916103 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c47b4509-0bb1-4360-9db3-29ebfcd734e3-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "c47b4509-0bb1-4360-9db3-29ebfcd734e3" (UID: "c47b4509-0bb1-4360-9db3-29ebfcd734e3"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:13:49 crc kubenswrapper[5104]: I0130 00:13:49.916285 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c47b4509-0bb1-4360-9db3-29ebfcd734e3-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "c47b4509-0bb1-4360-9db3-29ebfcd734e3" (UID: "c47b4509-0bb1-4360-9db3-29ebfcd734e3"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:13:49 crc kubenswrapper[5104]: I0130 00:13:49.916557 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c47b4509-0bb1-4360-9db3-29ebfcd734e3-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "c47b4509-0bb1-4360-9db3-29ebfcd734e3" (UID: "c47b4509-0bb1-4360-9db3-29ebfcd734e3"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:13:49 crc kubenswrapper[5104]: I0130 00:13:49.916737 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c47b4509-0bb1-4360-9db3-29ebfcd734e3-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "c47b4509-0bb1-4360-9db3-29ebfcd734e3" (UID: "c47b4509-0bb1-4360-9db3-29ebfcd734e3"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:13:50 crc kubenswrapper[5104]: I0130 00:13:50.008729 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9c7b3597-a966-4e45-9e80-dac6ae4a49eb-v4-0-config-system-router-certs\") pod \"oauth-openshift-6969b58588-z5d6p\" (UID: \"9c7b3597-a966-4e45-9e80-dac6ae4a49eb\") " pod="openshift-authentication/oauth-openshift-6969b58588-z5d6p" Jan 30 00:13:50 crc kubenswrapper[5104]: I0130 00:13:50.008795 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7mhrs\" (UniqueName: \"kubernetes.io/projected/9c7b3597-a966-4e45-9e80-dac6ae4a49eb-kube-api-access-7mhrs\") pod \"oauth-openshift-6969b58588-z5d6p\" (UID: \"9c7b3597-a966-4e45-9e80-dac6ae4a49eb\") " pod="openshift-authentication/oauth-openshift-6969b58588-z5d6p" Jan 30 00:13:50 crc kubenswrapper[5104]: I0130 00:13:50.008845 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9c7b3597-a966-4e45-9e80-dac6ae4a49eb-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6969b58588-z5d6p\" (UID: \"9c7b3597-a966-4e45-9e80-dac6ae4a49eb\") " pod="openshift-authentication/oauth-openshift-6969b58588-z5d6p" Jan 30 00:13:50 crc kubenswrapper[5104]: I0130 00:13:50.009050 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9c7b3597-a966-4e45-9e80-dac6ae4a49eb-v4-0-config-system-session\") pod \"oauth-openshift-6969b58588-z5d6p\" (UID: \"9c7b3597-a966-4e45-9e80-dac6ae4a49eb\") " pod="openshift-authentication/oauth-openshift-6969b58588-z5d6p" Jan 30 00:13:50 crc kubenswrapper[5104]: I0130 00:13:50.009112 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" 
(UniqueName: \"kubernetes.io/secret/9c7b3597-a966-4e45-9e80-dac6ae4a49eb-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6969b58588-z5d6p\" (UID: \"9c7b3597-a966-4e45-9e80-dac6ae4a49eb\") " pod="openshift-authentication/oauth-openshift-6969b58588-z5d6p" Jan 30 00:13:50 crc kubenswrapper[5104]: I0130 00:13:50.009186 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9c7b3597-a966-4e45-9e80-dac6ae4a49eb-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6969b58588-z5d6p\" (UID: \"9c7b3597-a966-4e45-9e80-dac6ae4a49eb\") " pod="openshift-authentication/oauth-openshift-6969b58588-z5d6p" Jan 30 00:13:50 crc kubenswrapper[5104]: I0130 00:13:50.009219 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9c7b3597-a966-4e45-9e80-dac6ae4a49eb-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6969b58588-z5d6p\" (UID: \"9c7b3597-a966-4e45-9e80-dac6ae4a49eb\") " pod="openshift-authentication/oauth-openshift-6969b58588-z5d6p" Jan 30 00:13:50 crc kubenswrapper[5104]: I0130 00:13:50.009300 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9c7b3597-a966-4e45-9e80-dac6ae4a49eb-v4-0-config-system-service-ca\") pod \"oauth-openshift-6969b58588-z5d6p\" (UID: \"9c7b3597-a966-4e45-9e80-dac6ae4a49eb\") " pod="openshift-authentication/oauth-openshift-6969b58588-z5d6p" Jan 30 00:13:50 crc kubenswrapper[5104]: I0130 00:13:50.009407 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9c7b3597-a966-4e45-9e80-dac6ae4a49eb-audit-policies\") pod \"oauth-openshift-6969b58588-z5d6p\" (UID: \"9c7b3597-a966-4e45-9e80-dac6ae4a49eb\") " 
pod="openshift-authentication/oauth-openshift-6969b58588-z5d6p" Jan 30 00:13:50 crc kubenswrapper[5104]: I0130 00:13:50.009496 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c7b3597-a966-4e45-9e80-dac6ae4a49eb-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6969b58588-z5d6p\" (UID: \"9c7b3597-a966-4e45-9e80-dac6ae4a49eb\") " pod="openshift-authentication/oauth-openshift-6969b58588-z5d6p" Jan 30 00:13:50 crc kubenswrapper[5104]: I0130 00:13:50.009584 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9c7b3597-a966-4e45-9e80-dac6ae4a49eb-v4-0-config-user-template-login\") pod \"oauth-openshift-6969b58588-z5d6p\" (UID: \"9c7b3597-a966-4e45-9e80-dac6ae4a49eb\") " pod="openshift-authentication/oauth-openshift-6969b58588-z5d6p" Jan 30 00:13:50 crc kubenswrapper[5104]: I0130 00:13:50.009622 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9c7b3597-a966-4e45-9e80-dac6ae4a49eb-audit-dir\") pod \"oauth-openshift-6969b58588-z5d6p\" (UID: \"9c7b3597-a966-4e45-9e80-dac6ae4a49eb\") " pod="openshift-authentication/oauth-openshift-6969b58588-z5d6p" Jan 30 00:13:50 crc kubenswrapper[5104]: I0130 00:13:50.009711 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9c7b3597-a966-4e45-9e80-dac6ae4a49eb-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6969b58588-z5d6p\" (UID: \"9c7b3597-a966-4e45-9e80-dac6ae4a49eb\") " pod="openshift-authentication/oauth-openshift-6969b58588-z5d6p" Jan 30 00:13:50 crc kubenswrapper[5104]: I0130 00:13:50.009792 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" 
(UniqueName: \"kubernetes.io/secret/9c7b3597-a966-4e45-9e80-dac6ae4a49eb-v4-0-config-user-template-error\") pod \"oauth-openshift-6969b58588-z5d6p\" (UID: \"9c7b3597-a966-4e45-9e80-dac6ae4a49eb\") " pod="openshift-authentication/oauth-openshift-6969b58588-z5d6p" Jan 30 00:13:50 crc kubenswrapper[5104]: I0130 00:13:50.010007 5104 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c47b4509-0bb1-4360-9db3-29ebfcd734e3-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:50 crc kubenswrapper[5104]: I0130 00:13:50.010620 5104 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c47b4509-0bb1-4360-9db3-29ebfcd734e3-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:50 crc kubenswrapper[5104]: I0130 00:13:50.010701 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9c7b3597-a966-4e45-9e80-dac6ae4a49eb-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6969b58588-z5d6p\" (UID: \"9c7b3597-a966-4e45-9e80-dac6ae4a49eb\") " pod="openshift-authentication/oauth-openshift-6969b58588-z5d6p" Jan 30 00:13:50 crc kubenswrapper[5104]: I0130 00:13:50.010877 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9c7b3597-a966-4e45-9e80-dac6ae4a49eb-audit-dir\") pod \"oauth-openshift-6969b58588-z5d6p\" (UID: \"9c7b3597-a966-4e45-9e80-dac6ae4a49eb\") " pod="openshift-authentication/oauth-openshift-6969b58588-z5d6p" Jan 30 00:13:50 crc kubenswrapper[5104]: I0130 00:13:50.011280 5104 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wnt95\" (UniqueName: \"kubernetes.io/projected/c47b4509-0bb1-4360-9db3-29ebfcd734e3-kube-api-access-wnt95\") on node \"crc\" DevicePath \"\"" Jan 30 
00:13:50 crc kubenswrapper[5104]: I0130 00:13:50.011331 5104 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c47b4509-0bb1-4360-9db3-29ebfcd734e3-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:50 crc kubenswrapper[5104]: I0130 00:13:50.011364 5104 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c47b4509-0bb1-4360-9db3-29ebfcd734e3-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:50 crc kubenswrapper[5104]: I0130 00:13:50.011398 5104 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c47b4509-0bb1-4360-9db3-29ebfcd734e3-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:50 crc kubenswrapper[5104]: I0130 00:13:50.011404 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9c7b3597-a966-4e45-9e80-dac6ae4a49eb-v4-0-config-system-service-ca\") pod \"oauth-openshift-6969b58588-z5d6p\" (UID: \"9c7b3597-a966-4e45-9e80-dac6ae4a49eb\") " pod="openshift-authentication/oauth-openshift-6969b58588-z5d6p" Jan 30 00:13:50 crc kubenswrapper[5104]: I0130 00:13:50.011427 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9c7b3597-a966-4e45-9e80-dac6ae4a49eb-audit-policies\") pod \"oauth-openshift-6969b58588-z5d6p\" (UID: \"9c7b3597-a966-4e45-9e80-dac6ae4a49eb\") " pod="openshift-authentication/oauth-openshift-6969b58588-z5d6p" Jan 30 00:13:50 crc kubenswrapper[5104]: I0130 00:13:50.011425 5104 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c47b4509-0bb1-4360-9db3-29ebfcd734e3-audit-policies\") on 
node \"crc\" DevicePath \"\"" Jan 30 00:13:50 crc kubenswrapper[5104]: I0130 00:13:50.011494 5104 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c47b4509-0bb1-4360-9db3-29ebfcd734e3-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:50 crc kubenswrapper[5104]: I0130 00:13:50.011526 5104 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c47b4509-0bb1-4360-9db3-29ebfcd734e3-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:50 crc kubenswrapper[5104]: I0130 00:13:50.011556 5104 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c47b4509-0bb1-4360-9db3-29ebfcd734e3-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:50 crc kubenswrapper[5104]: I0130 00:13:50.011583 5104 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c47b4509-0bb1-4360-9db3-29ebfcd734e3-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:50 crc kubenswrapper[5104]: I0130 00:13:50.011608 5104 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c47b4509-0bb1-4360-9db3-29ebfcd734e3-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:50 crc kubenswrapper[5104]: I0130 00:13:50.012252 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c7b3597-a966-4e45-9e80-dac6ae4a49eb-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6969b58588-z5d6p\" (UID: \"9c7b3597-a966-4e45-9e80-dac6ae4a49eb\") " pod="openshift-authentication/oauth-openshift-6969b58588-z5d6p" Jan 30 00:13:50 crc kubenswrapper[5104]: I0130 00:13:50.014074 
5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9c7b3597-a966-4e45-9e80-dac6ae4a49eb-v4-0-config-user-template-login\") pod \"oauth-openshift-6969b58588-z5d6p\" (UID: \"9c7b3597-a966-4e45-9e80-dac6ae4a49eb\") " pod="openshift-authentication/oauth-openshift-6969b58588-z5d6p" Jan 30 00:13:50 crc kubenswrapper[5104]: I0130 00:13:50.014096 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9c7b3597-a966-4e45-9e80-dac6ae4a49eb-v4-0-config-system-router-certs\") pod \"oauth-openshift-6969b58588-z5d6p\" (UID: \"9c7b3597-a966-4e45-9e80-dac6ae4a49eb\") " pod="openshift-authentication/oauth-openshift-6969b58588-z5d6p" Jan 30 00:13:50 crc kubenswrapper[5104]: I0130 00:13:50.014971 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9c7b3597-a966-4e45-9e80-dac6ae4a49eb-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6969b58588-z5d6p\" (UID: \"9c7b3597-a966-4e45-9e80-dac6ae4a49eb\") " pod="openshift-authentication/oauth-openshift-6969b58588-z5d6p" Jan 30 00:13:50 crc kubenswrapper[5104]: I0130 00:13:50.015144 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9c7b3597-a966-4e45-9e80-dac6ae4a49eb-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6969b58588-z5d6p\" (UID: \"9c7b3597-a966-4e45-9e80-dac6ae4a49eb\") " pod="openshift-authentication/oauth-openshift-6969b58588-z5d6p" Jan 30 00:13:50 crc kubenswrapper[5104]: I0130 00:13:50.015362 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9c7b3597-a966-4e45-9e80-dac6ae4a49eb-v4-0-config-system-serving-cert\") pod 
\"oauth-openshift-6969b58588-z5d6p\" (UID: \"9c7b3597-a966-4e45-9e80-dac6ae4a49eb\") " pod="openshift-authentication/oauth-openshift-6969b58588-z5d6p" Jan 30 00:13:50 crc kubenswrapper[5104]: I0130 00:13:50.015510 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9c7b3597-a966-4e45-9e80-dac6ae4a49eb-v4-0-config-system-session\") pod \"oauth-openshift-6969b58588-z5d6p\" (UID: \"9c7b3597-a966-4e45-9e80-dac6ae4a49eb\") " pod="openshift-authentication/oauth-openshift-6969b58588-z5d6p" Jan 30 00:13:50 crc kubenswrapper[5104]: I0130 00:13:50.015808 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9c7b3597-a966-4e45-9e80-dac6ae4a49eb-v4-0-config-user-template-error\") pod \"oauth-openshift-6969b58588-z5d6p\" (UID: \"9c7b3597-a966-4e45-9e80-dac6ae4a49eb\") " pod="openshift-authentication/oauth-openshift-6969b58588-z5d6p" Jan 30 00:13:50 crc kubenswrapper[5104]: I0130 00:13:50.018318 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9c7b3597-a966-4e45-9e80-dac6ae4a49eb-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6969b58588-z5d6p\" (UID: \"9c7b3597-a966-4e45-9e80-dac6ae4a49eb\") " pod="openshift-authentication/oauth-openshift-6969b58588-z5d6p" Jan 30 00:13:50 crc kubenswrapper[5104]: I0130 00:13:50.026487 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mhrs\" (UniqueName: \"kubernetes.io/projected/9c7b3597-a966-4e45-9e80-dac6ae4a49eb-kube-api-access-7mhrs\") pod \"oauth-openshift-6969b58588-z5d6p\" (UID: \"9c7b3597-a966-4e45-9e80-dac6ae4a49eb\") " pod="openshift-authentication/oauth-openshift-6969b58588-z5d6p" Jan 30 00:13:50 crc kubenswrapper[5104]: I0130 00:13:50.151457 5104 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6969b58588-z5d6p" Jan 30 00:13:50 crc kubenswrapper[5104]: I0130 00:13:50.330936 5104 ???:1] "http: TLS handshake error from 192.168.126.11:54678: no serving certificate available for the kubelet" Jan 30 00:13:50 crc kubenswrapper[5104]: I0130 00:13:50.389406 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6969b58588-z5d6p"] Jan 30 00:13:50 crc kubenswrapper[5104]: I0130 00:13:50.616067 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-g766x" event={"ID":"c47b4509-0bb1-4360-9db3-29ebfcd734e3","Type":"ContainerDied","Data":"8765d8b13fbc965d68b29dcd8d2dfd68578d3842f074689b719b57978f5048c4"} Jan 30 00:13:50 crc kubenswrapper[5104]: I0130 00:13:50.616133 5104 scope.go:117] "RemoveContainer" containerID="0ab9b2bb77fcaead421f25524b00b1e84579a0a28da49dbf7861e4ab78eb4ada" Jan 30 00:13:50 crc kubenswrapper[5104]: I0130 00:13:50.616077 5104 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-g766x" Jan 30 00:13:50 crc kubenswrapper[5104]: I0130 00:13:50.619609 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6969b58588-z5d6p" event={"ID":"9c7b3597-a966-4e45-9e80-dac6ae4a49eb","Type":"ContainerStarted","Data":"7091e1ba74ef4d5c56efb20b1b2efbdc0d77a93232a46141bd2498649b23cb2c"} Jan 30 00:13:50 crc kubenswrapper[5104]: I0130 00:13:50.639317 5104 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-g766x"] Jan 30 00:13:50 crc kubenswrapper[5104]: I0130 00:13:50.647701 5104 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-g766x"] Jan 30 00:13:51 crc kubenswrapper[5104]: I0130 00:13:51.162938 5104 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 30 00:13:51 crc kubenswrapper[5104]: I0130 00:13:51.171035 5104 kubelet.go:2547] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 30 00:13:51 crc kubenswrapper[5104]: I0130 00:13:51.171091 5104 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 30 00:13:51 crc kubenswrapper[5104]: I0130 00:13:51.171258 5104 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 30 00:13:51 crc kubenswrapper[5104]: I0130 00:13:51.171753 5104 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://6edbf8d3caa46b1b8204f581c4ee351245b3a0569a7dc860e8eebd05c21de73e" gracePeriod=15
Jan 30 00:13:51 crc kubenswrapper[5104]: I0130 00:13:51.171749 5104 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" containerID="cri-o://cefeb3f03767c76f93f967f91a3a91beb76d605eca9cbc8c1511e20275afe6f1" gracePeriod=15
Jan 30 00:13:51 crc kubenswrapper[5104]: I0130 00:13:51.171792 5104 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://cb67eb59e5fa97f3ac0f355c63297316d06ab76329d05baadeb90ba933d0299b" gracePeriod=15
Jan 30 00:13:51 crc kubenswrapper[5104]: I0130 00:13:51.171757 5104 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" containerID="cri-o://ba201d512edd4ea081c45c6b965e415a70015052f56b44640d9d6f3f294f3c12" gracePeriod=15
Jan 30 00:13:51 crc kubenswrapper[5104]: I0130 00:13:51.171780 5104 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" containerID="cri-o://341f0f24fd96be5b40281bed5ebcb965c115891201881ea7fca2d25b621efcf4" gracePeriod=15
Jan 30 00:13:51 crc kubenswrapper[5104]: I0130 00:13:51.172057 5104 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Jan 30 00:13:51 crc kubenswrapper[5104]: I0130 00:13:51.172075 5104 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Jan 30 00:13:51 crc kubenswrapper[5104]: I0130 00:13:51.172086 5104 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Jan 30 00:13:51 crc kubenswrapper[5104]: I0130 00:13:51.172093 5104 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Jan 30 00:13:51 crc kubenswrapper[5104]: I0130 00:13:51.172107 5104 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz"
Jan 30 00:13:51 crc kubenswrapper[5104]: I0130 00:13:51.172114 5104 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz"
Jan 30 00:13:51 crc kubenswrapper[5104]: I0130 00:13:51.172124 5104 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller"
Jan 30 00:13:51 crc kubenswrapper[5104]: I0130 00:13:51.172132 5104 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller"
Jan 30 00:13:51 crc kubenswrapper[5104]: I0130 00:13:51.172140 5104 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Jan 30 00:13:51 crc kubenswrapper[5104]: I0130 00:13:51.172146 5104 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Jan 30 00:13:51 crc kubenswrapper[5104]: I0130 00:13:51.172165 5104 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup"
Jan 30 00:13:51 crc kubenswrapper[5104]: I0130 00:13:51.172171 5104 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup"
Jan 30 00:13:51 crc kubenswrapper[5104]: I0130 00:13:51.172181 5104 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver"
Jan 30 00:13:51 crc kubenswrapper[5104]: I0130 00:13:51.172187 5104 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver"
Jan 30 00:13:51 crc kubenswrapper[5104]: I0130 00:13:51.172208 5104 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer"
Jan 30 00:13:51 crc kubenswrapper[5104]: I0130 00:13:51.172215 5104 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer"
Jan 30 00:13:51 crc kubenswrapper[5104]: I0130 00:13:51.172304 5104 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Jan 30 00:13:51 crc kubenswrapper[5104]: I0130 00:13:51.172314 5104 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Jan 30 00:13:51 crc kubenswrapper[5104]: I0130 00:13:51.172324 5104 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Jan 30 00:13:51 crc kubenswrapper[5104]: I0130 00:13:51.172335 5104 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller"
Jan 30 00:13:51 crc kubenswrapper[5104]: I0130 00:13:51.172344 5104 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Jan 30 00:13:51 crc kubenswrapper[5104]: I0130 00:13:51.172353 5104 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz"
Jan 30 00:13:51 crc kubenswrapper[5104]: I0130 00:13:51.172365 5104 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver"
Jan 30 00:13:51 crc kubenswrapper[5104]: I0130 00:13:51.172374 5104 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer"
Jan 30 00:13:51 crc kubenswrapper[5104]: I0130 00:13:51.172467 5104 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Jan 30 00:13:51 crc kubenswrapper[5104]: I0130 00:13:51.172476 5104 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Jan 30 00:13:51 crc kubenswrapper[5104]: I0130 00:13:51.172486 5104 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Jan 30 00:13:51 crc kubenswrapper[5104]: I0130 00:13:51.172492 5104 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Jan 30 00:13:51 crc kubenswrapper[5104]: I0130 00:13:51.172612 5104 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Jan 30 00:13:51 crc kubenswrapper[5104]: I0130 00:13:51.175041 5104 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="3a14caf222afb62aaabdc47808b6f944" podUID="57755cc5f99000cc11e193051474d4e2"
Jan 30 00:13:51 crc kubenswrapper[5104]: I0130 00:13:51.227712 5104 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 30 00:13:51 crc kubenswrapper[5104]: I0130 00:13:51.229542 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 30 00:13:51 crc kubenswrapper[5104]: I0130 00:13:51.229633 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 00:13:51 crc kubenswrapper[5104]: I0130 00:13:51.229703 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 30 00:13:51 crc kubenswrapper[5104]: I0130 00:13:51.229749 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 00:13:51 crc kubenswrapper[5104]: I0130 00:13:51.229811 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 00:13:51 crc kubenswrapper[5104]: I0130 00:13:51.229893 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 30 00:13:51 crc kubenswrapper[5104]: I0130 00:13:51.229971 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 30 00:13:51 crc kubenswrapper[5104]: I0130 00:13:51.230094 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 00:13:51 crc kubenswrapper[5104]: I0130 00:13:51.230194 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 30 00:13:51 crc kubenswrapper[5104]: I0130 00:13:51.230283 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 00:13:51 crc kubenswrapper[5104]: I0130 00:13:51.243470 5104 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 30 00:13:51 crc kubenswrapper[5104]: I0130 00:13:51.332473 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 30 00:13:51 crc kubenswrapper[5104]: I0130 00:13:51.332543 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 00:13:51 crc kubenswrapper[5104]: I0130 00:13:51.332573 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 30 00:13:51 crc kubenswrapper[5104]: I0130 00:13:51.332600 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 00:13:51 crc kubenswrapper[5104]: I0130 00:13:51.332622 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 30 00:13:51 crc kubenswrapper[5104]: I0130 00:13:51.332635 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 00:13:51 crc kubenswrapper[5104]: I0130 00:13:51.332684 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 30 00:13:51 crc kubenswrapper[5104]: I0130 00:13:51.332720 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 30 00:13:51 crc kubenswrapper[5104]: I0130 00:13:51.332741 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 00:13:51 crc kubenswrapper[5104]: I0130 00:13:51.332759 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 30 00:13:51 crc kubenswrapper[5104]: I0130 00:13:51.333167 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 00:13:51 crc kubenswrapper[5104]: I0130 00:13:51.333203 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 30 00:13:51 crc kubenswrapper[5104]: I0130 00:13:51.333225 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 00:13:51 crc kubenswrapper[5104]: I0130 00:13:51.333235 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 30 00:13:51 crc kubenswrapper[5104]: I0130 00:13:51.333256 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 00:13:51 crc kubenswrapper[5104]: I0130 00:13:51.333269 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 00:13:51 crc kubenswrapper[5104]: I0130 00:13:51.333290 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 30 00:13:51 crc kubenswrapper[5104]: I0130 00:13:51.333333 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 30 00:13:51 crc kubenswrapper[5104]: I0130 00:13:51.333352 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 00:13:51 crc kubenswrapper[5104]: I0130 00:13:51.333507 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 00:13:51 crc kubenswrapper[5104]: I0130 00:13:51.540223 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 30 00:13:51 crc kubenswrapper[5104]: W0130 00:13:51.567926 5104 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf7dbc7e1ee9c187a863ef9b473fad27b.slice/crio-31c3172cfb0d2d99ef34476d25b9bc6c6edcc57500ed370b38b80916cbe7e1b9 WatchSource:0}: Error finding container 31c3172cfb0d2d99ef34476d25b9bc6c6edcc57500ed370b38b80916cbe7e1b9: Status 404 returned error can't find the container with id 31c3172cfb0d2d99ef34476d25b9bc6c6edcc57500ed370b38b80916cbe7e1b9
Jan 30 00:13:51 crc kubenswrapper[5104]: E0130 00:13:51.585292 5104 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.184:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188f59e8022abb8e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:13:51.583533966 +0000 UTC m=+212.315873225,LastTimestamp:2026-01-30 00:13:51.583533966 +0000 UTC m=+212.315873225,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 00:13:51 crc kubenswrapper[5104]: I0130 00:13:51.628360 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"31c3172cfb0d2d99ef34476d25b9bc6c6edcc57500ed370b38b80916cbe7e1b9"}
Jan 30 00:13:51 crc kubenswrapper[5104]: I0130 00:13:51.629909 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6969b58588-z5d6p" event={"ID":"9c7b3597-a966-4e45-9e80-dac6ae4a49eb","Type":"ContainerStarted","Data":"248079022b510e95bfac1e3644a533eed214425576e26cbc270f96979b6df4ee"}
Jan 30 00:13:51 crc kubenswrapper[5104]: I0130 00:13:51.631934 5104 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log"
Jan 30 00:13:51 crc kubenswrapper[5104]: I0130 00:13:51.633317 5104 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log"
Jan 30 00:13:51 crc kubenswrapper[5104]: I0130 00:13:51.634034 5104 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="ba201d512edd4ea081c45c6b965e415a70015052f56b44640d9d6f3f294f3c12" exitCode=0
Jan 30 00:13:51 crc kubenswrapper[5104]: I0130 00:13:51.634054 5104 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="cb67eb59e5fa97f3ac0f355c63297316d06ab76329d05baadeb90ba933d0299b" exitCode=0
Jan 30 00:13:51 crc kubenswrapper[5104]: I0130 00:13:51.634061 5104 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="6edbf8d3caa46b1b8204f581c4ee351245b3a0569a7dc860e8eebd05c21de73e" exitCode=0
Jan 30 00:13:51 crc kubenswrapper[5104]: I0130 00:13:51.634067 5104 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="341f0f24fd96be5b40281bed5ebcb965c115891201881ea7fca2d25b621efcf4" exitCode=2
Jan 30 00:13:51 crc kubenswrapper[5104]: I0130 00:13:51.634141 5104 scope.go:117] "RemoveContainer" containerID="a59bc6c54fddbc4eea03ba9234e68465071e0ae39793c65b4dee93d13d3d8fc2"
Jan 30 00:13:51 crc kubenswrapper[5104]: I0130 00:13:51.637068 5104 generic.go:358] "Generic (PLEG): container finished" podID="60001511-f1e7-4c9e-9c1c-812709496c6c" containerID="69d5cb6cb4ac809645a02f0ecbb666f5d1b674c1adf1fa4600484a216213d523" exitCode=0
Jan 30 00:13:51 crc kubenswrapper[5104]: I0130 00:13:51.637111 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"60001511-f1e7-4c9e-9c1c-812709496c6c","Type":"ContainerDied","Data":"69d5cb6cb4ac809645a02f0ecbb666f5d1b674c1adf1fa4600484a216213d523"}
Jan 30 00:13:51 crc kubenswrapper[5104]: I0130 00:13:51.637704 5104 status_manager.go:895] "Failed to get status for pod" podUID="60001511-f1e7-4c9e-9c1c-812709496c6c" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.184:6443: connect: connection refused"
Jan 30 00:13:51 crc kubenswrapper[5104]: I0130 00:13:51.637948 5104 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.184:6443: connect: connection refused"
Jan 30 00:13:52 crc kubenswrapper[5104]: I0130 00:13:52.042110 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-6969b58588-z5d6p"
Jan 30 00:13:52 crc kubenswrapper[5104]: I0130 00:13:52.042870 5104 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.184:6443: connect: connection refused"
Jan 30 00:13:52 crc kubenswrapper[5104]: I0130 00:13:52.043393 5104 status_manager.go:895] "Failed to get status for pod" podUID="9c7b3597-a966-4e45-9e80-dac6ae4a49eb" pod="openshift-authentication/oauth-openshift-6969b58588-z5d6p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-6969b58588-z5d6p\": dial tcp 38.102.83.184:6443: connect: connection refused"
Jan 30 00:13:52 crc kubenswrapper[5104]: I0130 00:13:52.043682 5104 status_manager.go:895] "Failed to get status for pod" podUID="60001511-f1e7-4c9e-9c1c-812709496c6c" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.184:6443: connect: connection refused"
Jan 30 00:13:52 crc kubenswrapper[5104]: I0130 00:13:52.047246 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-6969b58588-z5d6p"
Jan 30 00:13:52 crc kubenswrapper[5104]: I0130 00:13:52.047789 5104 status_manager.go:895] "Failed to get status for pod" podUID="9c7b3597-a966-4e45-9e80-dac6ae4a49eb" pod="openshift-authentication/oauth-openshift-6969b58588-z5d6p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-6969b58588-z5d6p\": dial tcp 38.102.83.184:6443: connect: connection refused"
Jan 30 00:13:52 crc kubenswrapper[5104]: I0130 00:13:52.048415 5104 status_manager.go:895] "Failed to get status for pod" podUID="60001511-f1e7-4c9e-9c1c-812709496c6c" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.184:6443: connect: connection refused"
Jan 30 00:13:52 crc kubenswrapper[5104]: I0130 00:13:52.049050 5104 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.184:6443: connect: connection refused"
Jan 30 00:13:52 crc kubenswrapper[5104]: I0130 00:13:52.543014 5104 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c47b4509-0bb1-4360-9db3-29ebfcd734e3" path="/var/lib/kubelet/pods/c47b4509-0bb1-4360-9db3-29ebfcd734e3/volumes"
Jan 30 00:13:52 crc kubenswrapper[5104]: E0130 00:13:52.626339 5104 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.184:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188f59e8022abb8e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:13:51.583533966 +0000 UTC m=+212.315873225,LastTimestamp:2026-01-30 00:13:51.583533966 +0000 UTC m=+212.315873225,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 00:13:52 crc kubenswrapper[5104]: I0130 00:13:52.647121 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"dbe5af60972042b4bdbda2d4662a9939ef5764142d0a511bc63c7b9898f5e77a"}
Jan 30 00:13:52 crc kubenswrapper[5104]: I0130 00:13:52.647763 5104 status_manager.go:895] "Failed to get status for pod" podUID="60001511-f1e7-4c9e-9c1c-812709496c6c" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.184:6443: connect: connection refused"
Jan 30 00:13:52 crc kubenswrapper[5104]: I0130 00:13:52.648661 5104 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.184:6443: connect: connection refused"
Jan 30 00:13:52 crc kubenswrapper[5104]: I0130 00:13:52.649648 5104 status_manager.go:895] "Failed to get status for pod" podUID="9c7b3597-a966-4e45-9e80-dac6ae4a49eb" pod="openshift-authentication/oauth-openshift-6969b58588-z5d6p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-6969b58588-z5d6p\": dial tcp 38.102.83.184:6443: connect: connection refused"
Jan 30 00:13:52 crc kubenswrapper[5104]: I0130 00:13:52.652175 5104 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log"
Jan 30 00:13:52 crc kubenswrapper[5104]: I0130 00:13:52.943533 5104 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc"
Jan 30 00:13:52 crc kubenswrapper[5104]: I0130 00:13:52.944598 5104 status_manager.go:895] "Failed to get status for pod" podUID="9c7b3597-a966-4e45-9e80-dac6ae4a49eb" pod="openshift-authentication/oauth-openshift-6969b58588-z5d6p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-6969b58588-z5d6p\": dial tcp 38.102.83.184:6443: connect: connection refused"
Jan 30 00:13:52 crc kubenswrapper[5104]: I0130 00:13:52.945238 5104 status_manager.go:895] "Failed to get status for pod" podUID="60001511-f1e7-4c9e-9c1c-812709496c6c" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.184:6443: connect: connection refused"
Jan 30 00:13:52 crc kubenswrapper[5104]: I0130 00:13:52.945918 5104 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.184:6443: connect: connection refused"
Jan 30 00:13:53 crc kubenswrapper[5104]: I0130 00:13:53.062593 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/60001511-f1e7-4c9e-9c1c-812709496c6c-var-lock\") pod \"60001511-f1e7-4c9e-9c1c-812709496c6c\" (UID: \"60001511-f1e7-4c9e-9c1c-812709496c6c\") "
Jan 30 00:13:53 crc kubenswrapper[5104]: I0130 00:13:53.062643 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/60001511-f1e7-4c9e-9c1c-812709496c6c-kube-api-access\") pod \"60001511-f1e7-4c9e-9c1c-812709496c6c\" (UID: \"60001511-f1e7-4c9e-9c1c-812709496c6c\") "
Jan 30 00:13:53 crc kubenswrapper[5104]: I0130 00:13:53.062706 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/60001511-f1e7-4c9e-9c1c-812709496c6c-var-lock" (OuterVolumeSpecName: "var-lock") pod "60001511-f1e7-4c9e-9c1c-812709496c6c" (UID: "60001511-f1e7-4c9e-9c1c-812709496c6c"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 30 00:13:53 crc kubenswrapper[5104]: I0130 00:13:53.062823 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/60001511-f1e7-4c9e-9c1c-812709496c6c-kubelet-dir\") pod \"60001511-f1e7-4c9e-9c1c-812709496c6c\" (UID: \"60001511-f1e7-4c9e-9c1c-812709496c6c\") "
Jan 30 00:13:53 crc kubenswrapper[5104]: I0130 00:13:53.063048 5104 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/60001511-f1e7-4c9e-9c1c-812709496c6c-var-lock\") on node \"crc\" DevicePath \"\""
Jan 30 00:13:53 crc kubenswrapper[5104]: I0130 00:13:53.063096 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/60001511-f1e7-4c9e-9c1c-812709496c6c-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "60001511-f1e7-4c9e-9c1c-812709496c6c" (UID: "60001511-f1e7-4c9e-9c1c-812709496c6c"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 30 00:13:53 crc kubenswrapper[5104]: I0130 00:13:53.071635 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60001511-f1e7-4c9e-9c1c-812709496c6c-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "60001511-f1e7-4c9e-9c1c-812709496c6c" (UID: "60001511-f1e7-4c9e-9c1c-812709496c6c"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 30 00:13:53 crc kubenswrapper[5104]: I0130 00:13:53.164318 5104 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/60001511-f1e7-4c9e-9c1c-812709496c6c-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 30 00:13:53 crc kubenswrapper[5104]: I0130 00:13:53.164381 5104 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/60001511-f1e7-4c9e-9c1c-812709496c6c-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 30 00:13:53 crc kubenswrapper[5104]: I0130 00:13:53.664444 5104 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc"
Jan 30 00:13:53 crc kubenswrapper[5104]: I0130 00:13:53.664439 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"60001511-f1e7-4c9e-9c1c-812709496c6c","Type":"ContainerDied","Data":"118b8c4eded6c99581ca2467bab0fb89863f6f80a8cf311dab20e36219281aab"}
Jan 30 00:13:53 crc kubenswrapper[5104]: I0130 00:13:53.665223 5104 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="118b8c4eded6c99581ca2467bab0fb89863f6f80a8cf311dab20e36219281aab"
Jan 30 00:13:53 crc kubenswrapper[5104]: I0130 00:13:53.665228 5104 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log"
Jan 30 00:13:53 crc kubenswrapper[5104]: I0130 00:13:53.666640 5104 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 00:13:53 crc kubenswrapper[5104]: I0130 00:13:53.667369 5104 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.184:6443: connect: connection refused"
Jan 30 00:13:53 crc kubenswrapper[5104]: I0130 00:13:53.667685 5104 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.184:6443: connect: connection refused"
Jan 30 00:13:53 crc kubenswrapper[5104]: I0130 00:13:53.668168 5104 status_manager.go:895] "Failed to get status for pod" podUID="9c7b3597-a966-4e45-9e80-dac6ae4a49eb" pod="openshift-authentication/oauth-openshift-6969b58588-z5d6p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-6969b58588-z5d6p\": dial tcp 38.102.83.184:6443: connect: connection refused"
Jan 30 00:13:53 crc kubenswrapper[5104]: I0130 00:13:53.668532 5104 status_manager.go:895] "Failed to get status for pod" podUID="60001511-f1e7-4c9e-9c1c-812709496c6c" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.184:6443: connect: connection refused"
Jan 30 00:13:53 crc kubenswrapper[5104]: I0130 00:13:53.668995 5104 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log"
Jan 30 00:13:53 crc kubenswrapper[5104]: I0130 00:13:53.671405 5104
generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="cefeb3f03767c76f93f967f91a3a91beb76d605eca9cbc8c1511e20275afe6f1" exitCode=0 Jan 30 00:13:53 crc kubenswrapper[5104]: I0130 00:13:53.671507 5104 scope.go:117] "RemoveContainer" containerID="ba201d512edd4ea081c45c6b965e415a70015052f56b44640d9d6f3f294f3c12" Jan 30 00:13:53 crc kubenswrapper[5104]: I0130 00:13:53.690967 5104 status_manager.go:895] "Failed to get status for pod" podUID="60001511-f1e7-4c9e-9c1c-812709496c6c" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 30 00:13:53 crc kubenswrapper[5104]: I0130 00:13:53.691186 5104 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 30 00:13:53 crc kubenswrapper[5104]: I0130 00:13:53.691378 5104 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 30 00:13:53 crc kubenswrapper[5104]: I0130 00:13:53.691579 5104 status_manager.go:895] "Failed to get status for pod" podUID="9c7b3597-a966-4e45-9e80-dac6ae4a49eb" pod="openshift-authentication/oauth-openshift-6969b58588-z5d6p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-6969b58588-z5d6p\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 30 00:13:53 crc 
kubenswrapper[5104]: I0130 00:13:53.693877 5104 scope.go:117] "RemoveContainer" containerID="cb67eb59e5fa97f3ac0f355c63297316d06ab76329d05baadeb90ba933d0299b" Jan 30 00:13:53 crc kubenswrapper[5104]: I0130 00:13:53.713563 5104 scope.go:117] "RemoveContainer" containerID="6edbf8d3caa46b1b8204f581c4ee351245b3a0569a7dc860e8eebd05c21de73e" Jan 30 00:13:53 crc kubenswrapper[5104]: I0130 00:13:53.734553 5104 scope.go:117] "RemoveContainer" containerID="341f0f24fd96be5b40281bed5ebcb965c115891201881ea7fca2d25b621efcf4" Jan 30 00:13:53 crc kubenswrapper[5104]: I0130 00:13:53.758454 5104 scope.go:117] "RemoveContainer" containerID="cefeb3f03767c76f93f967f91a3a91beb76d605eca9cbc8c1511e20275afe6f1" Jan 30 00:13:53 crc kubenswrapper[5104]: I0130 00:13:53.771230 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Jan 30 00:13:53 crc kubenswrapper[5104]: I0130 00:13:53.771295 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Jan 30 00:13:53 crc kubenswrapper[5104]: I0130 00:13:53.771364 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:13:53 crc kubenswrapper[5104]: I0130 00:13:53.771399 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Jan 30 00:13:53 crc kubenswrapper[5104]: I0130 00:13:53.771468 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Jan 30 00:13:53 crc kubenswrapper[5104]: I0130 00:13:53.771501 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Jan 30 00:13:53 crc kubenswrapper[5104]: I0130 00:13:53.771665 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:13:53 crc kubenswrapper[5104]: I0130 00:13:53.771608 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:13:53 crc kubenswrapper[5104]: I0130 00:13:53.772205 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir" (OuterVolumeSpecName: "ca-bundle-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "ca-bundle-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:13:53 crc kubenswrapper[5104]: I0130 00:13:53.772497 5104 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:53 crc kubenswrapper[5104]: I0130 00:13:53.772532 5104 reconciler_common.go:299] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:53 crc kubenswrapper[5104]: I0130 00:13:53.772548 5104 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:53 crc kubenswrapper[5104]: I0130 00:13:53.772567 5104 reconciler_common.go:299] "Volume detached for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:53 crc kubenswrapper[5104]: I0130 00:13:53.776646 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "tmp-dir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:13:53 crc kubenswrapper[5104]: I0130 00:13:53.778987 5104 scope.go:117] "RemoveContainer" containerID="ae33bfa613b46c1b72972cced863e6209b8e9eafade2bfbe6cd19d2fb5ee3591" Jan 30 00:13:53 crc kubenswrapper[5104]: I0130 00:13:53.850750 5104 scope.go:117] "RemoveContainer" containerID="ba201d512edd4ea081c45c6b965e415a70015052f56b44640d9d6f3f294f3c12" Jan 30 00:13:53 crc kubenswrapper[5104]: E0130 00:13:53.851417 5104 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba201d512edd4ea081c45c6b965e415a70015052f56b44640d9d6f3f294f3c12\": container with ID starting with ba201d512edd4ea081c45c6b965e415a70015052f56b44640d9d6f3f294f3c12 not found: ID does not exist" containerID="ba201d512edd4ea081c45c6b965e415a70015052f56b44640d9d6f3f294f3c12" Jan 30 00:13:53 crc kubenswrapper[5104]: I0130 00:13:53.851476 5104 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba201d512edd4ea081c45c6b965e415a70015052f56b44640d9d6f3f294f3c12"} err="failed to get container status \"ba201d512edd4ea081c45c6b965e415a70015052f56b44640d9d6f3f294f3c12\": rpc error: code = NotFound desc = could not find container \"ba201d512edd4ea081c45c6b965e415a70015052f56b44640d9d6f3f294f3c12\": container with ID starting with ba201d512edd4ea081c45c6b965e415a70015052f56b44640d9d6f3f294f3c12 not found: ID does not exist" Jan 30 00:13:53 crc kubenswrapper[5104]: I0130 00:13:53.851522 5104 scope.go:117] "RemoveContainer" containerID="cb67eb59e5fa97f3ac0f355c63297316d06ab76329d05baadeb90ba933d0299b" Jan 30 00:13:53 crc kubenswrapper[5104]: E0130 00:13:53.852207 5104 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb67eb59e5fa97f3ac0f355c63297316d06ab76329d05baadeb90ba933d0299b\": container with ID starting with 
cb67eb59e5fa97f3ac0f355c63297316d06ab76329d05baadeb90ba933d0299b not found: ID does not exist" containerID="cb67eb59e5fa97f3ac0f355c63297316d06ab76329d05baadeb90ba933d0299b" Jan 30 00:13:53 crc kubenswrapper[5104]: I0130 00:13:53.852252 5104 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb67eb59e5fa97f3ac0f355c63297316d06ab76329d05baadeb90ba933d0299b"} err="failed to get container status \"cb67eb59e5fa97f3ac0f355c63297316d06ab76329d05baadeb90ba933d0299b\": rpc error: code = NotFound desc = could not find container \"cb67eb59e5fa97f3ac0f355c63297316d06ab76329d05baadeb90ba933d0299b\": container with ID starting with cb67eb59e5fa97f3ac0f355c63297316d06ab76329d05baadeb90ba933d0299b not found: ID does not exist" Jan 30 00:13:53 crc kubenswrapper[5104]: I0130 00:13:53.852278 5104 scope.go:117] "RemoveContainer" containerID="6edbf8d3caa46b1b8204f581c4ee351245b3a0569a7dc860e8eebd05c21de73e" Jan 30 00:13:53 crc kubenswrapper[5104]: E0130 00:13:53.852607 5104 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6edbf8d3caa46b1b8204f581c4ee351245b3a0569a7dc860e8eebd05c21de73e\": container with ID starting with 6edbf8d3caa46b1b8204f581c4ee351245b3a0569a7dc860e8eebd05c21de73e not found: ID does not exist" containerID="6edbf8d3caa46b1b8204f581c4ee351245b3a0569a7dc860e8eebd05c21de73e" Jan 30 00:13:53 crc kubenswrapper[5104]: I0130 00:13:53.852634 5104 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6edbf8d3caa46b1b8204f581c4ee351245b3a0569a7dc860e8eebd05c21de73e"} err="failed to get container status \"6edbf8d3caa46b1b8204f581c4ee351245b3a0569a7dc860e8eebd05c21de73e\": rpc error: code = NotFound desc = could not find container \"6edbf8d3caa46b1b8204f581c4ee351245b3a0569a7dc860e8eebd05c21de73e\": container with ID starting with 6edbf8d3caa46b1b8204f581c4ee351245b3a0569a7dc860e8eebd05c21de73e not found: ID does not 
exist" Jan 30 00:13:53 crc kubenswrapper[5104]: I0130 00:13:53.852653 5104 scope.go:117] "RemoveContainer" containerID="341f0f24fd96be5b40281bed5ebcb965c115891201881ea7fca2d25b621efcf4" Jan 30 00:13:53 crc kubenswrapper[5104]: E0130 00:13:53.853072 5104 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"341f0f24fd96be5b40281bed5ebcb965c115891201881ea7fca2d25b621efcf4\": container with ID starting with 341f0f24fd96be5b40281bed5ebcb965c115891201881ea7fca2d25b621efcf4 not found: ID does not exist" containerID="341f0f24fd96be5b40281bed5ebcb965c115891201881ea7fca2d25b621efcf4" Jan 30 00:13:53 crc kubenswrapper[5104]: I0130 00:13:53.853108 5104 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"341f0f24fd96be5b40281bed5ebcb965c115891201881ea7fca2d25b621efcf4"} err="failed to get container status \"341f0f24fd96be5b40281bed5ebcb965c115891201881ea7fca2d25b621efcf4\": rpc error: code = NotFound desc = could not find container \"341f0f24fd96be5b40281bed5ebcb965c115891201881ea7fca2d25b621efcf4\": container with ID starting with 341f0f24fd96be5b40281bed5ebcb965c115891201881ea7fca2d25b621efcf4 not found: ID does not exist" Jan 30 00:13:53 crc kubenswrapper[5104]: I0130 00:13:53.853133 5104 scope.go:117] "RemoveContainer" containerID="cefeb3f03767c76f93f967f91a3a91beb76d605eca9cbc8c1511e20275afe6f1" Jan 30 00:13:53 crc kubenswrapper[5104]: E0130 00:13:53.853404 5104 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cefeb3f03767c76f93f967f91a3a91beb76d605eca9cbc8c1511e20275afe6f1\": container with ID starting with cefeb3f03767c76f93f967f91a3a91beb76d605eca9cbc8c1511e20275afe6f1 not found: ID does not exist" containerID="cefeb3f03767c76f93f967f91a3a91beb76d605eca9cbc8c1511e20275afe6f1" Jan 30 00:13:53 crc kubenswrapper[5104]: I0130 00:13:53.853424 5104 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cefeb3f03767c76f93f967f91a3a91beb76d605eca9cbc8c1511e20275afe6f1"} err="failed to get container status \"cefeb3f03767c76f93f967f91a3a91beb76d605eca9cbc8c1511e20275afe6f1\": rpc error: code = NotFound desc = could not find container \"cefeb3f03767c76f93f967f91a3a91beb76d605eca9cbc8c1511e20275afe6f1\": container with ID starting with cefeb3f03767c76f93f967f91a3a91beb76d605eca9cbc8c1511e20275afe6f1 not found: ID does not exist" Jan 30 00:13:53 crc kubenswrapper[5104]: I0130 00:13:53.853436 5104 scope.go:117] "RemoveContainer" containerID="ae33bfa613b46c1b72972cced863e6209b8e9eafade2bfbe6cd19d2fb5ee3591" Jan 30 00:13:53 crc kubenswrapper[5104]: E0130 00:13:53.853756 5104 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae33bfa613b46c1b72972cced863e6209b8e9eafade2bfbe6cd19d2fb5ee3591\": container with ID starting with ae33bfa613b46c1b72972cced863e6209b8e9eafade2bfbe6cd19d2fb5ee3591 not found: ID does not exist" containerID="ae33bfa613b46c1b72972cced863e6209b8e9eafade2bfbe6cd19d2fb5ee3591" Jan 30 00:13:53 crc kubenswrapper[5104]: I0130 00:13:53.853786 5104 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae33bfa613b46c1b72972cced863e6209b8e9eafade2bfbe6cd19d2fb5ee3591"} err="failed to get container status \"ae33bfa613b46c1b72972cced863e6209b8e9eafade2bfbe6cd19d2fb5ee3591\": rpc error: code = NotFound desc = could not find container \"ae33bfa613b46c1b72972cced863e6209b8e9eafade2bfbe6cd19d2fb5ee3591\": container with ID starting with ae33bfa613b46c1b72972cced863e6209b8e9eafade2bfbe6cd19d2fb5ee3591 not found: ID does not exist" Jan 30 00:13:53 crc kubenswrapper[5104]: I0130 00:13:53.873765 5104 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:54 
crc kubenswrapper[5104]: I0130 00:13:54.535461 5104 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a14caf222afb62aaabdc47808b6f944" path="/var/lib/kubelet/pods/3a14caf222afb62aaabdc47808b6f944/volumes" Jan 30 00:13:54 crc kubenswrapper[5104]: I0130 00:13:54.681068 5104 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:13:54 crc kubenswrapper[5104]: I0130 00:13:54.682047 5104 status_manager.go:895] "Failed to get status for pod" podUID="9c7b3597-a966-4e45-9e80-dac6ae4a49eb" pod="openshift-authentication/oauth-openshift-6969b58588-z5d6p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-6969b58588-z5d6p\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 30 00:13:54 crc kubenswrapper[5104]: I0130 00:13:54.682304 5104 status_manager.go:895] "Failed to get status for pod" podUID="60001511-f1e7-4c9e-9c1c-812709496c6c" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 30 00:13:54 crc kubenswrapper[5104]: I0130 00:13:54.682596 5104 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 30 00:13:54 crc kubenswrapper[5104]: I0130 00:13:54.682991 5104 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.184:6443: 
connect: connection refused" Jan 30 00:13:54 crc kubenswrapper[5104]: I0130 00:13:54.686637 5104 status_manager.go:895] "Failed to get status for pod" podUID="60001511-f1e7-4c9e-9c1c-812709496c6c" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 30 00:13:54 crc kubenswrapper[5104]: I0130 00:13:54.686961 5104 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 30 00:13:54 crc kubenswrapper[5104]: I0130 00:13:54.687347 5104 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 30 00:13:54 crc kubenswrapper[5104]: I0130 00:13:54.687709 5104 status_manager.go:895] "Failed to get status for pod" podUID="9c7b3597-a966-4e45-9e80-dac6ae4a49eb" pod="openshift-authentication/oauth-openshift-6969b58588-z5d6p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-6969b58588-z5d6p\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 30 00:13:54 crc kubenswrapper[5104]: E0130 00:13:54.787106 5104 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 30 00:13:54 crc kubenswrapper[5104]: E0130 00:13:54.787576 5104 
controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 30 00:13:54 crc kubenswrapper[5104]: E0130 00:13:54.788003 5104 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 30 00:13:54 crc kubenswrapper[5104]: E0130 00:13:54.788357 5104 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 30 00:13:54 crc kubenswrapper[5104]: E0130 00:13:54.788766 5104 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 30 00:13:54 crc kubenswrapper[5104]: I0130 00:13:54.788805 5104 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 30 00:13:54 crc kubenswrapper[5104]: E0130 00:13:54.789133 5104 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.184:6443: connect: connection refused" interval="200ms" Jan 30 00:13:54 crc kubenswrapper[5104]: E0130 00:13:54.990724 5104 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.184:6443: connect: connection refused" interval="400ms" Jan 30 00:13:55 crc 
kubenswrapper[5104]: E0130 00:13:55.392373 5104 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.184:6443: connect: connection refused" interval="800ms" Jan 30 00:13:56 crc kubenswrapper[5104]: E0130 00:13:56.193400 5104 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.184:6443: connect: connection refused" interval="1.6s" Jan 30 00:13:57 crc kubenswrapper[5104]: E0130 00:13:57.794793 5104 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.184:6443: connect: connection refused" interval="3.2s" Jan 30 00:14:00 crc kubenswrapper[5104]: I0130 00:14:00.533041 5104 status_manager.go:895] "Failed to get status for pod" podUID="9c7b3597-a966-4e45-9e80-dac6ae4a49eb" pod="openshift-authentication/oauth-openshift-6969b58588-z5d6p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-6969b58588-z5d6p\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 30 00:14:00 crc kubenswrapper[5104]: I0130 00:14:00.533396 5104 status_manager.go:895] "Failed to get status for pod" podUID="60001511-f1e7-4c9e-9c1c-812709496c6c" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 30 00:14:00 crc kubenswrapper[5104]: I0130 00:14:00.533611 5104 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 30 00:14:00 crc kubenswrapper[5104]: E0130 00:14:00.606973 5104 desired_state_of_world_populator.go:305] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.184:6443: connect: connection refused" pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" volumeName="registry-storage" Jan 30 00:14:00 crc kubenswrapper[5104]: E0130 00:14:00.996650 5104 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.184:6443: connect: connection refused" interval="6.4s" Jan 30 00:14:02 crc kubenswrapper[5104]: E0130 00:14:02.627996 5104 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.184:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188f59e8022abb8e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on 
machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:13:51.583533966 +0000 UTC m=+212.315873225,LastTimestamp:2026-01-30 00:13:51.583533966 +0000 UTC m=+212.315873225,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:14:03 crc kubenswrapper[5104]: I0130 00:14:03.525517 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:14:03 crc kubenswrapper[5104]: I0130 00:14:03.526697 5104 status_manager.go:895] "Failed to get status for pod" podUID="60001511-f1e7-4c9e-9c1c-812709496c6c" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 30 00:14:03 crc kubenswrapper[5104]: I0130 00:14:03.527451 5104 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 30 00:14:03 crc kubenswrapper[5104]: I0130 00:14:03.528098 5104 status_manager.go:895] "Failed to get status for pod" podUID="9c7b3597-a966-4e45-9e80-dac6ae4a49eb" pod="openshift-authentication/oauth-openshift-6969b58588-z5d6p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-6969b58588-z5d6p\": dial tcp 38.102.83.184:6443: connect: connection refused" Jan 30 00:14:03 crc kubenswrapper[5104]: I0130 00:14:03.550393 5104 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a53efdae-bb47-4e91-8fd9-aa3ce42e07fe" Jan 30 00:14:03 crc kubenswrapper[5104]: 
I0130 00:14:03.550606 5104 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a53efdae-bb47-4e91-8fd9-aa3ce42e07fe"
Jan 30 00:14:03 crc kubenswrapper[5104]: E0130 00:14:03.551198 5104 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.184:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 00:14:03 crc kubenswrapper[5104]: I0130 00:14:03.551677 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 00:14:03 crc kubenswrapper[5104]: I0130 00:14:03.743875 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"139fcd3f4f5436127941ff3eb5fcbc719b5b7836e4094b66fea66fae9bdaca79"}
Jan 30 00:14:04 crc kubenswrapper[5104]: I0130 00:14:04.755268 5104 generic.go:358] "Generic (PLEG): container finished" podID="57755cc5f99000cc11e193051474d4e2" containerID="2882962dd479a5ef4b2436d2ac11b0c3277541ebb7b216459dd508887fcf7497" exitCode=0
Jan 30 00:14:04 crc kubenswrapper[5104]: I0130 00:14:04.755717 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerDied","Data":"2882962dd479a5ef4b2436d2ac11b0c3277541ebb7b216459dd508887fcf7497"}
Jan 30 00:14:04 crc kubenswrapper[5104]: I0130 00:14:04.756316 5104 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a53efdae-bb47-4e91-8fd9-aa3ce42e07fe"
Jan 30 00:14:04 crc kubenswrapper[5104]: I0130 00:14:04.756349 5104 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a53efdae-bb47-4e91-8fd9-aa3ce42e07fe"
Jan 30 00:14:04 crc kubenswrapper[5104]: I0130 00:14:04.757188 5104 status_manager.go:895] "Failed to get status for pod" podUID="9c7b3597-a966-4e45-9e80-dac6ae4a49eb" pod="openshift-authentication/oauth-openshift-6969b58588-z5d6p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-6969b58588-z5d6p\": dial tcp 38.102.83.184:6443: connect: connection refused"
Jan 30 00:14:04 crc kubenswrapper[5104]: E0130 00:14:04.757250 5104 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.184:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 00:14:04 crc kubenswrapper[5104]: I0130 00:14:04.757604 5104 status_manager.go:895] "Failed to get status for pod" podUID="60001511-f1e7-4c9e-9c1c-812709496c6c" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.184:6443: connect: connection refused"
Jan 30 00:14:04 crc kubenswrapper[5104]: I0130 00:14:04.757995 5104 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.184:6443: connect: connection refused"
Jan 30 00:14:04 crc kubenswrapper[5104]: I0130 00:14:04.762045 5104 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Jan 30 00:14:04 crc kubenswrapper[5104]: I0130 00:14:04.762120 5104 generic.go:358] "Generic (PLEG): container finished" podID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerID="b8f7b53bbb2fea415aa6f8cab552a634e497844f09ceab42a0dccba0cc0d62fd" exitCode=1
Jan 30 00:14:04 crc kubenswrapper[5104]: I0130 00:14:04.762267 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerDied","Data":"b8f7b53bbb2fea415aa6f8cab552a634e497844f09ceab42a0dccba0cc0d62fd"}
Jan 30 00:14:04 crc kubenswrapper[5104]: I0130 00:14:04.763466 5104 scope.go:117] "RemoveContainer" containerID="b8f7b53bbb2fea415aa6f8cab552a634e497844f09ceab42a0dccba0cc0d62fd"
Jan 30 00:14:04 crc kubenswrapper[5104]: I0130 00:14:04.763795 5104 status_manager.go:895] "Failed to get status for pod" podUID="60001511-f1e7-4c9e-9c1c-812709496c6c" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.184:6443: connect: connection refused"
Jan 30 00:14:04 crc kubenswrapper[5104]: I0130 00:14:04.764503 5104 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.184:6443: connect: connection refused"
Jan 30 00:14:04 crc kubenswrapper[5104]: I0130 00:14:04.765077 5104 status_manager.go:895] "Failed to get status for pod" podUID="9c7b3597-a966-4e45-9e80-dac6ae4a49eb" pod="openshift-authentication/oauth-openshift-6969b58588-z5d6p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-6969b58588-z5d6p\": dial tcp 38.102.83.184:6443: connect: connection refused"
Jan 30 00:14:04 crc kubenswrapper[5104]: I0130 00:14:04.765576 5104 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.184:6443: connect: connection refused"
Jan 30 00:14:05 crc kubenswrapper[5104]: I0130 00:14:05.778030 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"3c668915bd6b1922d0b1db06c3f10813d766b3943397d6051fc1ac5ecd71af43"}
Jan 30 00:14:05 crc kubenswrapper[5104]: I0130 00:14:05.778496 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"1d70c30917d225dd7e97c75382af986a3494ebd7e2f11d2e5ba1f9cade38f091"}
Jan 30 00:14:05 crc kubenswrapper[5104]: I0130 00:14:05.781288 5104 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Jan 30 00:14:05 crc kubenswrapper[5104]: I0130 00:14:05.781457 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"4fc74766481ce349901adaf615ea42ef157fc2037c76643b29ce3098c519dfcd"}
Jan 30 00:14:06 crc kubenswrapper[5104]: I0130 00:14:06.791673 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"ddfde772f88d5c9b73316c6664965caf3b4cc0bd861bebeec2eade32d33344f4"}
Jan 30 00:14:06 crc kubenswrapper[5104]: I0130 00:14:06.791932 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 00:14:06 crc kubenswrapper[5104]: I0130 00:14:06.791949 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"c52ed3a74dc3584a1f7e0f221fada2fdc6bade626fd5167cd81c229b46a1e3d1"}
Jan 30 00:14:06 crc kubenswrapper[5104]: I0130 00:14:06.791962 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"e325495f1296b4de9c5bc5079f80bdb093f7f1c5ac9e1650ea22759aa3c00624"}
Jan 30 00:14:06 crc kubenswrapper[5104]: I0130 00:14:06.792035 5104 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a53efdae-bb47-4e91-8fd9-aa3ce42e07fe"
Jan 30 00:14:06 crc kubenswrapper[5104]: I0130 00:14:06.792062 5104 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a53efdae-bb47-4e91-8fd9-aa3ce42e07fe"
Jan 30 00:14:07 crc kubenswrapper[5104]: I0130 00:14:07.006150 5104 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 30 00:14:07 crc kubenswrapper[5104]: I0130 00:14:07.016176 5104 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 30 00:14:07 crc kubenswrapper[5104]: I0130 00:14:07.796357 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 30 00:14:08 crc kubenswrapper[5104]: I0130 00:14:08.552447 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 00:14:08 crc kubenswrapper[5104]: I0130 00:14:08.552531 5104 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 00:14:08 crc kubenswrapper[5104]: I0130 00:14:08.560935 5104 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 00:14:11 crc kubenswrapper[5104]: I0130 00:14:11.799874 5104 kubelet.go:3329] "Deleted mirror pod as it didn't match the static Pod" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 00:14:11 crc kubenswrapper[5104]: I0130 00:14:11.800204 5104 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 00:14:11 crc kubenswrapper[5104]: I0130 00:14:11.828790 5104 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a53efdae-bb47-4e91-8fd9-aa3ce42e07fe"
Jan 30 00:14:11 crc kubenswrapper[5104]: I0130 00:14:11.828825 5104 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a53efdae-bb47-4e91-8fd9-aa3ce42e07fe"
Jan 30 00:14:11 crc kubenswrapper[5104]: I0130 00:14:11.833469 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 00:14:11 crc kubenswrapper[5104]: I0130 00:14:11.837769 5104 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="d2b02d26-a211-4d89-abb8-6232ea401fcf"
Jan 30 00:14:12 crc kubenswrapper[5104]: I0130 00:14:12.834787 5104 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a53efdae-bb47-4e91-8fd9-aa3ce42e07fe"
Jan 30 00:14:12 crc kubenswrapper[5104]: I0130 00:14:12.835730 5104 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a53efdae-bb47-4e91-8fd9-aa3ce42e07fe"
Jan 30 00:14:14 crc kubenswrapper[5104]: I0130 00:14:14.949958 5104 patch_prober.go:28] interesting pod/machine-config-daemon-jzfxc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 00:14:14 crc kubenswrapper[5104]: I0130 00:14:14.950074 5104 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jzfxc" podUID="2f49b5db-a679-4eef-9bf2-8d0275caac12" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 00:14:18 crc kubenswrapper[5104]: I0130 00:14:18.810419 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 30 00:14:20 crc kubenswrapper[5104]: I0130 00:14:20.550107 5104 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="d2b02d26-a211-4d89-abb8-6232ea401fcf"
Jan 30 00:14:21 crc kubenswrapper[5104]: I0130 00:14:21.375751 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\""
Jan 30 00:14:22 crc kubenswrapper[5104]: I0130 00:14:22.086351 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\""
Jan 30 00:14:22 crc kubenswrapper[5104]: I0130 00:14:22.134821 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\""
Jan 30 00:14:22 crc kubenswrapper[5104]: I0130 00:14:22.244081 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\""
Jan 30 00:14:22 crc kubenswrapper[5104]: I0130 00:14:22.857678 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\""
Jan 30 00:14:22 crc kubenswrapper[5104]: I0130 00:14:22.881560 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\""
Jan 30 00:14:23 crc kubenswrapper[5104]: I0130 00:14:23.348673 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\""
Jan 30 00:14:23 crc kubenswrapper[5104]: I0130 00:14:23.382731 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\""
Jan 30 00:14:23 crc kubenswrapper[5104]: I0130 00:14:23.428672 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\""
Jan 30 00:14:23 crc kubenswrapper[5104]: I0130 00:14:23.480400 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\""
Jan 30 00:14:23 crc kubenswrapper[5104]: I0130 00:14:23.771963 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\""
Jan 30 00:14:23 crc kubenswrapper[5104]: I0130 00:14:23.809702 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\""
Jan 30 00:14:23 crc kubenswrapper[5104]: I0130 00:14:23.856873 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\""
Jan 30 00:14:24 crc kubenswrapper[5104]: I0130 00:14:24.072267 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\""
Jan 30 00:14:24 crc kubenswrapper[5104]: I0130 00:14:24.178714 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\""
Jan 30 00:14:24 crc kubenswrapper[5104]: I0130 00:14:24.295215 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\""
Jan 30 00:14:24 crc kubenswrapper[5104]: I0130 00:14:24.314761 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\""
Jan 30 00:14:24 crc kubenswrapper[5104]: I0130 00:14:24.435541 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\""
Jan 30 00:14:24 crc kubenswrapper[5104]: I0130 00:14:24.544172 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\""
Jan 30 00:14:24 crc kubenswrapper[5104]: I0130 00:14:24.581442 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\""
Jan 30 00:14:24 crc kubenswrapper[5104]: I0130 00:14:24.743960 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\""
Jan 30 00:14:24 crc kubenswrapper[5104]: I0130 00:14:24.785802 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\""
Jan 30 00:14:24 crc kubenswrapper[5104]: I0130 00:14:24.969206 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\""
Jan 30 00:14:25 crc kubenswrapper[5104]: I0130 00:14:25.095785 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\""
Jan 30 00:14:25 crc kubenswrapper[5104]: I0130 00:14:25.128890 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\""
Jan 30 00:14:25 crc kubenswrapper[5104]: I0130 00:14:25.177716 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\""
Jan 30 00:14:25 crc kubenswrapper[5104]: I0130 00:14:25.265957 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\""
Jan 30 00:14:25 crc kubenswrapper[5104]: I0130 00:14:25.424942 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\""
Jan 30 00:14:25 crc kubenswrapper[5104]: I0130 00:14:25.446456 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\""
Jan 30 00:14:25 crc kubenswrapper[5104]: I0130 00:14:25.717771 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\""
Jan 30 00:14:25 crc kubenswrapper[5104]: I0130 00:14:25.729322 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\""
Jan 30 00:14:25 crc kubenswrapper[5104]: I0130 00:14:25.965692 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\""
Jan 30 00:14:26 crc kubenswrapper[5104]: I0130 00:14:26.023229 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\""
Jan 30 00:14:26 crc kubenswrapper[5104]: I0130 00:14:26.107201 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\""
Jan 30 00:14:26 crc kubenswrapper[5104]: I0130 00:14:26.196195 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\""
Jan 30 00:14:26 crc kubenswrapper[5104]: I0130 00:14:26.244740 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\""
Jan 30 00:14:26 crc kubenswrapper[5104]: I0130 00:14:26.251083 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\""
Jan 30 00:14:26 crc kubenswrapper[5104]: I0130 00:14:26.311476 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\""
Jan 30 00:14:26 crc kubenswrapper[5104]: I0130 00:14:26.363897 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\""
Jan 30 00:14:26 crc kubenswrapper[5104]: I0130 00:14:26.665820 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\""
Jan 30 00:14:26 crc kubenswrapper[5104]: I0130 00:14:26.725007 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\""
Jan 30 00:14:26 crc kubenswrapper[5104]: I0130 00:14:26.784803 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\""
Jan 30 00:14:26 crc kubenswrapper[5104]: I0130 00:14:26.838619 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\""
Jan 30 00:14:26 crc kubenswrapper[5104]: I0130 00:14:26.934150 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\""
Jan 30 00:14:26 crc kubenswrapper[5104]: I0130 00:14:26.957999 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\""
Jan 30 00:14:27 crc kubenswrapper[5104]: I0130 00:14:27.054369 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\""
Jan 30 00:14:27 crc kubenswrapper[5104]: I0130 00:14:27.063778 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\""
Jan 30 00:14:27 crc kubenswrapper[5104]: I0130 00:14:27.073598 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\""
Jan 30 00:14:27 crc kubenswrapper[5104]: I0130 00:14:27.201455 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\""
Jan 30 00:14:27 crc kubenswrapper[5104]: I0130 00:14:27.281915 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\""
Jan 30 00:14:27 crc kubenswrapper[5104]: I0130 00:14:27.326789 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\""
Jan 30 00:14:27 crc kubenswrapper[5104]: I0130 00:14:27.418606 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\""
Jan 30 00:14:27 crc kubenswrapper[5104]: I0130 00:14:27.456252 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\""
Jan 30 00:14:27 crc kubenswrapper[5104]: I0130 00:14:27.534934 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\""
Jan 30 00:14:27 crc kubenswrapper[5104]: I0130 00:14:27.653180 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\""
Jan 30 00:14:27 crc kubenswrapper[5104]: I0130 00:14:27.683325 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\""
Jan 30 00:14:27 crc kubenswrapper[5104]: I0130 00:14:27.728345 5104 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162"
Jan 30 00:14:27 crc kubenswrapper[5104]: I0130 00:14:27.740515 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\""
Jan 30 00:14:27 crc kubenswrapper[5104]: I0130 00:14:27.741500 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\""
Jan 30 00:14:27 crc kubenswrapper[5104]: I0130 00:14:27.765918 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\""
Jan 30 00:14:27 crc kubenswrapper[5104]: I0130 00:14:27.847106 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\""
Jan 30 00:14:27 crc kubenswrapper[5104]: I0130 00:14:27.858012 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\""
Jan 30 00:14:27 crc kubenswrapper[5104]: I0130 00:14:27.860814 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\""
Jan 30 00:14:27 crc kubenswrapper[5104]: I0130 00:14:27.924501 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\""
Jan 30 00:14:27 crc kubenswrapper[5104]: I0130 00:14:27.952151 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\""
Jan 30 00:14:28 crc kubenswrapper[5104]: I0130 00:14:28.109592 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\""
Jan 30 00:14:28 crc kubenswrapper[5104]: I0130 00:14:28.132556 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\""
Jan 30 00:14:28 crc kubenswrapper[5104]: I0130 00:14:28.280547 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\""
Jan 30 00:14:28 crc kubenswrapper[5104]: I0130 00:14:28.282889 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\""
Jan 30 00:14:28 crc kubenswrapper[5104]: I0130 00:14:28.286674 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\""
Jan 30 00:14:28 crc kubenswrapper[5104]: I0130 00:14:28.289606 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\""
Jan 30 00:14:28 crc kubenswrapper[5104]: I0130 00:14:28.293619 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\""
Jan 30 00:14:28 crc kubenswrapper[5104]: I0130 00:14:28.380809 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\""
Jan 30 00:14:28 crc kubenswrapper[5104]: I0130 00:14:28.381457 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\""
Jan 30 00:14:28 crc kubenswrapper[5104]: I0130 00:14:28.471177 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\""
Jan 30 00:14:28 crc kubenswrapper[5104]: I0130 00:14:28.490368 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\""
Jan 30 00:14:28 crc kubenswrapper[5104]: I0130 00:14:28.576482 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\""
Jan 30 00:14:28 crc kubenswrapper[5104]: I0130 00:14:28.631054 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\""
Jan 30 00:14:28 crc kubenswrapper[5104]: I0130 00:14:28.744900 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\""
Jan 30 00:14:28 crc kubenswrapper[5104]: I0130 00:14:28.763654 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\""
Jan 30 00:14:28 crc kubenswrapper[5104]: I0130 00:14:28.790476 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\""
Jan 30 00:14:28 crc kubenswrapper[5104]: I0130 00:14:28.870664 5104 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160"
Jan 30 00:14:28 crc kubenswrapper[5104]: I0130 00:14:28.916347 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\""
Jan 30 00:14:28 crc kubenswrapper[5104]: I0130 00:14:28.922123 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\""
Jan 30 00:14:28 crc kubenswrapper[5104]: I0130 00:14:28.924137 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\""
Jan 30 00:14:28 crc kubenswrapper[5104]: I0130 00:14:28.938587 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\""
Jan 30 00:14:29 crc kubenswrapper[5104]: I0130 00:14:29.025449 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\""
Jan 30 00:14:29 crc kubenswrapper[5104]: I0130 00:14:29.027987 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\""
Jan 30 00:14:29 crc kubenswrapper[5104]: I0130 00:14:29.033340 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\""
Jan 30 00:14:29 crc kubenswrapper[5104]: I0130 00:14:29.132491 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\""
Jan 30 00:14:29 crc kubenswrapper[5104]: I0130 00:14:29.173316 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\""
Jan 30 00:14:29 crc kubenswrapper[5104]: I0130 00:14:29.280990 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\""
Jan 30 00:14:29 crc kubenswrapper[5104]: I0130 00:14:29.320220 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\""
Jan 30 00:14:29 crc kubenswrapper[5104]: I0130 00:14:29.395432 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\""
Jan 30 00:14:29 crc kubenswrapper[5104]: I0130 00:14:29.493338 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\""
Jan 30 00:14:29 crc kubenswrapper[5104]: I0130 00:14:29.513649 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\""
Jan 30 00:14:29 crc kubenswrapper[5104]: I0130 00:14:29.675122 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\""
Jan 30 00:14:29 crc kubenswrapper[5104]: I0130 00:14:29.676776 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\""
Jan 30 00:14:29 crc kubenswrapper[5104]: I0130 00:14:29.683695 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\""
Jan 30 00:14:29 crc kubenswrapper[5104]: I0130 00:14:29.713385 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\""
Jan 30 00:14:29 crc kubenswrapper[5104]: I0130 00:14:29.734179 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\""
Jan 30 00:14:29 crc kubenswrapper[5104]: I0130 00:14:29.867824 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\""
Jan 30 00:14:29 crc kubenswrapper[5104]: I0130 00:14:29.922570 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\""
Jan 30 00:14:29 crc kubenswrapper[5104]: I0130 00:14:29.937803 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\""
Jan 30 00:14:29 crc kubenswrapper[5104]: I0130 00:14:29.969001 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\""
Jan 30 00:14:29 crc kubenswrapper[5104]: I0130 00:14:29.992075 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\""
Jan 30 00:14:30 crc kubenswrapper[5104]: I0130 00:14:30.041302 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\""
Jan 30 00:14:30 crc kubenswrapper[5104]: I0130 00:14:30.051847 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\""
Jan 30 00:14:30 crc kubenswrapper[5104]: I0130 00:14:30.158819 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\""
Jan 30 00:14:30 crc kubenswrapper[5104]: I0130 00:14:30.303931 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\""
Jan 30 00:14:30 crc kubenswrapper[5104]: I0130 00:14:30.305658 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\""
Jan 30 00:14:30 crc kubenswrapper[5104]: I0130 00:14:30.307980 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\""
Jan 30 00:14:30 crc kubenswrapper[5104]: I0130 00:14:30.341374 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\""
Jan 30 00:14:30 crc kubenswrapper[5104]: I0130 00:14:30.361887 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\""
Jan 30 00:14:30 crc kubenswrapper[5104]: I0130 00:14:30.379540 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\""
Jan 30 00:14:30 crc kubenswrapper[5104]: I0130 00:14:30.430365 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\""
Jan 30 00:14:30 crc kubenswrapper[5104]: I0130 00:14:30.458023 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\""
Jan 30 00:14:30 crc kubenswrapper[5104]: I0130 00:14:30.459435 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\""
Jan 30 00:14:30 crc kubenswrapper[5104]: I0130 00:14:30.514880 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\""
Jan 30 00:14:30 crc kubenswrapper[5104]: I0130 00:14:30.605921 5104 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66"
Jan 30 00:14:30 crc kubenswrapper[5104]: I0130 00:14:30.607917 5104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=39.607905239 podStartE2EDuration="39.607905239s" podCreationTimestamp="2026-01-30 00:13:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:14:11.58925149 +0000 UTC m=+232.321590709" watchObservedRunningTime="2026-01-30 00:14:30.607905239 +0000 UTC m=+251.340244458"
Jan 30 00:14:30 crc kubenswrapper[5104]: I0130 00:14:30.610316 5104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-6969b58588-z5d6p" podStartSLOduration=66.610311274 podStartE2EDuration="1m6.610311274s" podCreationTimestamp="2026-01-30 00:13:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:14:11.53564064 +0000 UTC m=+232.267979869" watchObservedRunningTime="2026-01-30 00:14:30.610311274 +0000 UTC m=+251.342650493"
Jan 30 00:14:30 crc kubenswrapper[5104]: I0130 00:14:30.610534 5104 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 30 00:14:30 crc kubenswrapper[5104]: I0130 00:14:30.610562 5104 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 30 00:14:30 crc kubenswrapper[5104]: I0130 00:14:30.615307 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 00:14:30 crc kubenswrapper[5104]: I0130 00:14:30.629469 5104 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=19.629455209 podStartE2EDuration="19.629455209s" podCreationTimestamp="2026-01-30 00:14:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:14:30.626451908 +0000 UTC m=+251.358791127" watchObservedRunningTime="2026-01-30 00:14:30.629455209 +0000 UTC m=+251.361794428" Jan 30 00:14:30 crc kubenswrapper[5104]: I0130 00:14:30.785517 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Jan 30 00:14:30 crc kubenswrapper[5104]: I0130 00:14:30.808385 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\"" Jan 30 00:14:30 crc kubenswrapper[5104]: I0130 00:14:30.850885 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Jan 30 00:14:31 crc kubenswrapper[5104]: I0130 00:14:31.029519 5104 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160" Jan 30 00:14:31 crc kubenswrapper[5104]: I0130 00:14:31.037955 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\"" Jan 30 00:14:31 crc kubenswrapper[5104]: I0130 00:14:31.044482 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\"" Jan 30 00:14:31 crc kubenswrapper[5104]: I0130 00:14:31.147735 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\"" Jan 30 00:14:31 crc kubenswrapper[5104]: I0130 00:14:31.176595 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\"" Jan 30 00:14:31 crc kubenswrapper[5104]: I0130 00:14:31.382555 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\"" Jan 30 00:14:31 crc kubenswrapper[5104]: I0130 00:14:31.396271 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\"" Jan 30 00:14:31 crc kubenswrapper[5104]: I0130 00:14:31.408728 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\"" Jan 30 00:14:31 crc kubenswrapper[5104]: I0130 00:14:31.439816 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\"" Jan 30 00:14:31 crc kubenswrapper[5104]: I0130 00:14:31.449760 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\"" Jan 30 00:14:31 crc kubenswrapper[5104]: I0130 00:14:31.475557 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" Jan 30 00:14:31 crc kubenswrapper[5104]: I0130 00:14:31.503199 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\"" Jan 30 00:14:31 crc kubenswrapper[5104]: I0130 00:14:31.650311 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\"" Jan 30 00:14:31 crc kubenswrapper[5104]: I0130 00:14:31.676117 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\"" Jan 30 00:14:31 crc kubenswrapper[5104]: I0130 00:14:31.721662 5104 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-network-operator\"/\"metrics-tls\"" Jan 30 00:14:31 crc kubenswrapper[5104]: I0130 00:14:31.774546 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\"" Jan 30 00:14:31 crc kubenswrapper[5104]: I0130 00:14:31.872787 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\"" Jan 30 00:14:31 crc kubenswrapper[5104]: I0130 00:14:31.917268 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\"" Jan 30 00:14:32 crc kubenswrapper[5104]: I0130 00:14:32.030091 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\"" Jan 30 00:14:32 crc kubenswrapper[5104]: I0130 00:14:32.032790 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\"" Jan 30 00:14:32 crc kubenswrapper[5104]: I0130 00:14:32.168315 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\"" Jan 30 00:14:32 crc kubenswrapper[5104]: I0130 00:14:32.299131 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\"" Jan 30 00:14:32 crc kubenswrapper[5104]: I0130 00:14:32.356256 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\"" Jan 30 00:14:32 crc kubenswrapper[5104]: I0130 00:14:32.361878 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\"" Jan 30 00:14:32 crc kubenswrapper[5104]: I0130 00:14:32.364348 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\"" Jan 30 00:14:32 crc kubenswrapper[5104]: I0130 00:14:32.396230 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\"" Jan 30 00:14:32 crc kubenswrapper[5104]: I0130 00:14:32.464288 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\"" Jan 30 00:14:32 crc kubenswrapper[5104]: I0130 00:14:32.479671 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\"" Jan 30 00:14:32 crc kubenswrapper[5104]: I0130 00:14:32.488951 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\"" Jan 30 00:14:32 crc kubenswrapper[5104]: I0130 00:14:32.502544 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\"" Jan 30 00:14:32 crc kubenswrapper[5104]: I0130 00:14:32.534687 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\"" Jan 30 00:14:32 crc kubenswrapper[5104]: I0130 00:14:32.639213 5104 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Jan 30 00:14:32 crc kubenswrapper[5104]: I0130 00:14:32.748799 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\"" Jan 30 00:14:32 crc kubenswrapper[5104]: I0130 00:14:32.759511 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" Jan 30 00:14:32 crc kubenswrapper[5104]: I0130 00:14:32.811470 5104 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\"" Jan 30 00:14:32 crc kubenswrapper[5104]: I0130 00:14:32.849140 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\"" Jan 30 00:14:32 crc kubenswrapper[5104]: I0130 00:14:32.904043 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Jan 30 00:14:32 crc kubenswrapper[5104]: I0130 00:14:32.992205 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\"" Jan 30 00:14:33 crc kubenswrapper[5104]: I0130 00:14:33.110486 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\"" Jan 30 00:14:33 crc kubenswrapper[5104]: I0130 00:14:33.117589 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\"" Jan 30 00:14:33 crc kubenswrapper[5104]: I0130 00:14:33.126392 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\"" Jan 30 00:14:33 crc kubenswrapper[5104]: I0130 00:14:33.191012 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\"" Jan 30 00:14:33 crc kubenswrapper[5104]: I0130 00:14:33.196400 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:14:33 crc kubenswrapper[5104]: I0130 00:14:33.205193 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\"" Jan 30 00:14:33 crc 
kubenswrapper[5104]: I0130 00:14:33.259814 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\"" Jan 30 00:14:33 crc kubenswrapper[5104]: I0130 00:14:33.445997 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\"" Jan 30 00:14:33 crc kubenswrapper[5104]: I0130 00:14:33.548468 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Jan 30 00:14:33 crc kubenswrapper[5104]: I0130 00:14:33.552548 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Jan 30 00:14:33 crc kubenswrapper[5104]: I0130 00:14:33.566906 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\"" Jan 30 00:14:33 crc kubenswrapper[5104]: I0130 00:14:33.720098 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\"" Jan 30 00:14:33 crc kubenswrapper[5104]: I0130 00:14:33.747287 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\"" Jan 30 00:14:33 crc kubenswrapper[5104]: I0130 00:14:33.754785 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\"" Jan 30 00:14:33 crc kubenswrapper[5104]: I0130 00:14:33.766497 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\"" Jan 30 00:14:33 crc kubenswrapper[5104]: I0130 00:14:33.785108 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" Jan 30 00:14:33 crc kubenswrapper[5104]: I0130 00:14:33.826728 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\"" Jan 30 00:14:33 crc kubenswrapper[5104]: I0130 00:14:33.959733 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\"" Jan 30 00:14:33 crc kubenswrapper[5104]: I0130 00:14:33.966115 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:14:34 crc kubenswrapper[5104]: I0130 00:14:34.016724 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\"" Jan 30 00:14:34 crc kubenswrapper[5104]: I0130 00:14:34.038653 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Jan 30 00:14:34 crc kubenswrapper[5104]: I0130 00:14:34.043457 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\"" Jan 30 00:14:34 crc kubenswrapper[5104]: I0130 00:14:34.066989 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\"" Jan 30 00:14:34 crc kubenswrapper[5104]: I0130 00:14:34.147756 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\"" Jan 30 00:14:34 crc kubenswrapper[5104]: I0130 00:14:34.161739 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\"" Jan 30 00:14:34 crc kubenswrapper[5104]: I0130 00:14:34.194899 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-route-controller-manager\"/\"client-ca\"" Jan 30 00:14:34 crc kubenswrapper[5104]: I0130 00:14:34.225010 5104 kubelet.go:2547] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 30 00:14:34 crc kubenswrapper[5104]: I0130 00:14:34.225325 5104 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" containerID="cri-o://dbe5af60972042b4bdbda2d4662a9939ef5764142d0a511bc63c7b9898f5e77a" gracePeriod=5 Jan 30 00:14:34 crc kubenswrapper[5104]: I0130 00:14:34.329151 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\"" Jan 30 00:14:34 crc kubenswrapper[5104]: I0130 00:14:34.335792 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\"" Jan 30 00:14:34 crc kubenswrapper[5104]: I0130 00:14:34.338378 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\"" Jan 30 00:14:34 crc kubenswrapper[5104]: I0130 00:14:34.353083 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\"" Jan 30 00:14:34 crc kubenswrapper[5104]: I0130 00:14:34.360110 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\"" Jan 30 00:14:34 crc kubenswrapper[5104]: I0130 00:14:34.463315 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\"" Jan 30 00:14:34 crc kubenswrapper[5104]: I0130 00:14:34.512420 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-authentication\"/\"audit\"" Jan 30 00:14:34 crc kubenswrapper[5104]: I0130 00:14:34.551004 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\"" Jan 30 00:14:34 crc kubenswrapper[5104]: I0130 00:14:34.551330 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\"" Jan 30 00:14:34 crc kubenswrapper[5104]: I0130 00:14:34.568623 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\"" Jan 30 00:14:34 crc kubenswrapper[5104]: I0130 00:14:34.584232 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\"" Jan 30 00:14:34 crc kubenswrapper[5104]: I0130 00:14:34.626228 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\"" Jan 30 00:14:34 crc kubenswrapper[5104]: I0130 00:14:34.742997 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\"" Jan 30 00:14:34 crc kubenswrapper[5104]: I0130 00:14:34.746038 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\"" Jan 30 00:14:34 crc kubenswrapper[5104]: I0130 00:14:34.852000 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\"" Jan 30 00:14:34 crc kubenswrapper[5104]: I0130 00:14:34.929520 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\"" Jan 30 00:14:34 crc kubenswrapper[5104]: I0130 00:14:34.938357 5104 
reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\"" Jan 30 00:14:34 crc kubenswrapper[5104]: I0130 00:14:34.961762 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\"" Jan 30 00:14:35 crc kubenswrapper[5104]: I0130 00:14:35.012081 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\"" Jan 30 00:14:35 crc kubenswrapper[5104]: I0130 00:14:35.032704 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\"" Jan 30 00:14:35 crc kubenswrapper[5104]: I0130 00:14:35.038106 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\"" Jan 30 00:14:35 crc kubenswrapper[5104]: I0130 00:14:35.070636 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\"" Jan 30 00:14:35 crc kubenswrapper[5104]: I0130 00:14:35.209889 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\"" Jan 30 00:14:35 crc kubenswrapper[5104]: I0130 00:14:35.218570 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\"" Jan 30 00:14:35 crc kubenswrapper[5104]: I0130 00:14:35.333928 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\"" Jan 30 00:14:35 crc kubenswrapper[5104]: I0130 00:14:35.354160 5104 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\"" Jan 30 00:14:35 crc kubenswrapper[5104]: I0130 00:14:35.365093 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Jan 30 00:14:35 crc kubenswrapper[5104]: I0130 00:14:35.408695 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Jan 30 00:14:35 crc kubenswrapper[5104]: I0130 00:14:35.516641 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\"" Jan 30 00:14:35 crc kubenswrapper[5104]: I0130 00:14:35.546468 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\"" Jan 30 00:14:35 crc kubenswrapper[5104]: I0130 00:14:35.546814 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\"" Jan 30 00:14:35 crc kubenswrapper[5104]: I0130 00:14:35.667626 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\"" Jan 30 00:14:35 crc kubenswrapper[5104]: I0130 00:14:35.736664 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\"" Jan 30 00:14:35 crc kubenswrapper[5104]: I0130 00:14:35.792655 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\"" Jan 30 00:14:35 crc kubenswrapper[5104]: I0130 00:14:35.913247 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\"" Jan 30 00:14:35 crc kubenswrapper[5104]: I0130 00:14:35.923926 5104 reflector.go:430] 
"Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\"" Jan 30 00:14:35 crc kubenswrapper[5104]: I0130 00:14:35.947947 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\"" Jan 30 00:14:35 crc kubenswrapper[5104]: I0130 00:14:35.991027 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\"" Jan 30 00:14:36 crc kubenswrapper[5104]: I0130 00:14:36.048908 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\"" Jan 30 00:14:36 crc kubenswrapper[5104]: I0130 00:14:36.192732 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Jan 30 00:14:36 crc kubenswrapper[5104]: I0130 00:14:36.206341 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\"" Jan 30 00:14:36 crc kubenswrapper[5104]: I0130 00:14:36.209373 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:14:36 crc kubenswrapper[5104]: I0130 00:14:36.256099 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\"" Jan 30 00:14:36 crc kubenswrapper[5104]: I0130 00:14:36.413569 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\"" Jan 30 00:14:36 crc kubenswrapper[5104]: I0130 00:14:36.425177 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\"" Jan 30 00:14:36 crc kubenswrapper[5104]: I0130 00:14:36.623306 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\"" Jan 30 00:14:36 crc kubenswrapper[5104]: I0130 00:14:36.639358 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\"" Jan 30 00:14:36 crc kubenswrapper[5104]: I0130 00:14:36.714722 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\"" Jan 30 00:14:36 crc kubenswrapper[5104]: I0130 00:14:36.757622 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\"" Jan 30 00:14:36 crc kubenswrapper[5104]: I0130 00:14:36.882580 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\"" Jan 30 00:14:37 crc kubenswrapper[5104]: I0130 00:14:37.199068 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\"" Jan 30 00:14:37 crc kubenswrapper[5104]: I0130 00:14:37.246510 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\"" Jan 30 00:14:37 crc kubenswrapper[5104]: I0130 00:14:37.406776 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\"" Jan 30 00:14:37 crc kubenswrapper[5104]: I0130 00:14:37.569469 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\"" Jan 30 00:14:37 crc kubenswrapper[5104]: I0130 00:14:37.834736 5104 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160" Jan 30 00:14:37 crc kubenswrapper[5104]: I0130 00:14:37.929265 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:14:38 crc kubenswrapper[5104]: I0130 00:14:38.009355 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" Jan 30 00:14:38 crc kubenswrapper[5104]: I0130 00:14:38.157903 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Jan 30 00:14:38 crc kubenswrapper[5104]: I0130 00:14:38.878308 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\"" Jan 30 00:14:39 crc kubenswrapper[5104]: I0130 00:14:39.017731 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\"" Jan 30 00:14:39 crc kubenswrapper[5104]: I0130 00:14:39.084124 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\"" Jan 30 00:14:39 crc kubenswrapper[5104]: I0130 00:14:39.339255 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\"" Jan 30 00:14:39 crc kubenswrapper[5104]: I0130 00:14:39.453813 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\"" Jan 30 00:14:39 crc kubenswrapper[5104]: I0130 00:14:39.795129 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\"" Jan 30 00:14:39 crc kubenswrapper[5104]: I0130 00:14:39.799793 5104 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log" Jan 30 00:14:39 crc kubenswrapper[5104]: I0130 00:14:39.799897 5104 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 00:14:39 crc kubenswrapper[5104]: I0130 00:14:39.927599 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Jan 30 00:14:39 crc kubenswrapper[5104]: I0130 00:14:39.927747 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests" (OuterVolumeSpecName: "manifests") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:14:39 crc kubenswrapper[5104]: I0130 00:14:39.927828 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Jan 30 00:14:39 crc kubenswrapper[5104]: I0130 00:14:39.927867 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Jan 30 00:14:39 crc kubenswrapper[5104]: I0130 00:14:39.927897 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Jan 30 00:14:39 crc kubenswrapper[5104]: I0130 00:14:39.927958 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume 
started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Jan 30 00:14:39 crc kubenswrapper[5104]: I0130 00:14:39.928241 5104 reconciler_common.go:299] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") on node \"crc\" DevicePath \"\"" Jan 30 00:14:39 crc kubenswrapper[5104]: I0130 00:14:39.928313 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:14:39 crc kubenswrapper[5104]: I0130 00:14:39.928349 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log" (OuterVolumeSpecName: "var-log") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:14:39 crc kubenswrapper[5104]: I0130 00:14:39.928370 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock" (OuterVolumeSpecName: "var-lock") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:14:39 crc kubenswrapper[5104]: I0130 00:14:39.944785 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:14:39 crc kubenswrapper[5104]: I0130 00:14:39.990463 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\"" Jan 30 00:14:40 crc kubenswrapper[5104]: I0130 00:14:40.002125 5104 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log" Jan 30 00:14:40 crc kubenswrapper[5104]: I0130 00:14:40.002207 5104 generic.go:358] "Generic (PLEG): container finished" podID="f7dbc7e1ee9c187a863ef9b473fad27b" containerID="dbe5af60972042b4bdbda2d4662a9939ef5764142d0a511bc63c7b9898f5e77a" exitCode=137 Jan 30 00:14:40 crc kubenswrapper[5104]: I0130 00:14:40.002307 5104 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 00:14:40 crc kubenswrapper[5104]: I0130 00:14:40.002317 5104 scope.go:117] "RemoveContainer" containerID="dbe5af60972042b4bdbda2d4662a9939ef5764142d0a511bc63c7b9898f5e77a" Jan 30 00:14:40 crc kubenswrapper[5104]: I0130 00:14:40.023617 5104 scope.go:117] "RemoveContainer" containerID="dbe5af60972042b4bdbda2d4662a9939ef5764142d0a511bc63c7b9898f5e77a" Jan 30 00:14:40 crc kubenswrapper[5104]: E0130 00:14:40.024137 5104 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dbe5af60972042b4bdbda2d4662a9939ef5764142d0a511bc63c7b9898f5e77a\": container with ID starting with dbe5af60972042b4bdbda2d4662a9939ef5764142d0a511bc63c7b9898f5e77a not found: ID does not exist" containerID="dbe5af60972042b4bdbda2d4662a9939ef5764142d0a511bc63c7b9898f5e77a" Jan 30 00:14:40 crc kubenswrapper[5104]: I0130 00:14:40.024183 5104 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbe5af60972042b4bdbda2d4662a9939ef5764142d0a511bc63c7b9898f5e77a"} err="failed to get container status \"dbe5af60972042b4bdbda2d4662a9939ef5764142d0a511bc63c7b9898f5e77a\": rpc error: code = NotFound desc = could not find container \"dbe5af60972042b4bdbda2d4662a9939ef5764142d0a511bc63c7b9898f5e77a\": container with ID starting with dbe5af60972042b4bdbda2d4662a9939ef5764142d0a511bc63c7b9898f5e77a not found: ID does not exist" Jan 30 00:14:40 crc kubenswrapper[5104]: I0130 00:14:40.029913 5104 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 30 00:14:40 crc kubenswrapper[5104]: I0130 00:14:40.029955 5104 reconciler_common.go:299] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") on node \"crc\" 
DevicePath \"\"" Jan 30 00:14:40 crc kubenswrapper[5104]: I0130 00:14:40.029967 5104 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") on node \"crc\" DevicePath \"\"" Jan 30 00:14:40 crc kubenswrapper[5104]: I0130 00:14:40.029981 5104 reconciler_common.go:299] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 30 00:14:40 crc kubenswrapper[5104]: I0130 00:14:40.168341 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\"" Jan 30 00:14:40 crc kubenswrapper[5104]: I0130 00:14:40.538295 5104 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" path="/var/lib/kubelet/pods/f7dbc7e1ee9c187a863ef9b473fad27b/volumes" Jan 30 00:14:40 crc kubenswrapper[5104]: I0130 00:14:40.538598 5104 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Jan 30 00:14:40 crc kubenswrapper[5104]: I0130 00:14:40.553119 5104 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 30 00:14:40 crc kubenswrapper[5104]: I0130 00:14:40.553149 5104 kubelet.go:2759] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="3458e870-4c52-46ba-86f1-68a3f7668b69" Jan 30 00:14:40 crc kubenswrapper[5104]: I0130 00:14:40.558896 5104 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 30 00:14:40 crc kubenswrapper[5104]: I0130 00:14:40.559958 5104 kubelet.go:2784] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" 
mirrorPodUID="3458e870-4c52-46ba-86f1-68a3f7668b69" Jan 30 00:14:40 crc kubenswrapper[5104]: I0130 00:14:40.731405 5104 ???:1] "http: TLS handshake error from 192.168.126.11:46648: no serving certificate available for the kubelet" Jan 30 00:14:40 crc kubenswrapper[5104]: I0130 00:14:40.742461 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\"" Jan 30 00:14:44 crc kubenswrapper[5104]: I0130 00:14:44.949948 5104 patch_prober.go:28] interesting pod/machine-config-daemon-jzfxc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 00:14:44 crc kubenswrapper[5104]: I0130 00:14:44.950480 5104 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jzfxc" podUID="2f49b5db-a679-4eef-9bf2-8d0275caac12" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 00:14:59 crc kubenswrapper[5104]: I0130 00:14:59.146758 5104 generic.go:358] "Generic (PLEG): container finished" podID="b5f128e0-a6da-409d-9937-dc7f8b000da0" containerID="e30d3aeeceacab27a9a72c6a0d28ae371c5d759542a61ab5492afddc30ea0ae0" exitCode=0 Jan 30 00:14:59 crc kubenswrapper[5104]: I0130 00:14:59.146868 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-mb4lh" event={"ID":"b5f128e0-a6da-409d-9937-dc7f8b000da0","Type":"ContainerDied","Data":"e30d3aeeceacab27a9a72c6a0d28ae371c5d759542a61ab5492afddc30ea0ae0"} Jan 30 00:14:59 crc kubenswrapper[5104]: I0130 00:14:59.148289 5104 scope.go:117] "RemoveContainer" containerID="e30d3aeeceacab27a9a72c6a0d28ae371c5d759542a61ab5492afddc30ea0ae0" Jan 30 00:14:59 crc kubenswrapper[5104]: I0130 
00:14:59.807835 5104 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-547dbd544d-mb4lh" Jan 30 00:15:00 crc kubenswrapper[5104]: I0130 00:15:00.180141 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-mb4lh" event={"ID":"b5f128e0-a6da-409d-9937-dc7f8b000da0","Type":"ContainerStarted","Data":"9a99b79562b543b8478c0c9793192f0c534ba468fdaaea406e68dfc73717569a"} Jan 30 00:15:00 crc kubenswrapper[5104]: I0130 00:15:00.180507 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-mb4lh" Jan 30 00:15:00 crc kubenswrapper[5104]: I0130 00:15:00.183191 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-mb4lh" Jan 30 00:15:00 crc kubenswrapper[5104]: I0130 00:15:00.196762 5104 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495535-nrqpx"] Jan 30 00:15:00 crc kubenswrapper[5104]: I0130 00:15:00.197373 5104 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="60001511-f1e7-4c9e-9c1c-812709496c6c" containerName="installer" Jan 30 00:15:00 crc kubenswrapper[5104]: I0130 00:15:00.197395 5104 state_mem.go:107] "Deleted CPUSet assignment" podUID="60001511-f1e7-4c9e-9c1c-812709496c6c" containerName="installer" Jan 30 00:15:00 crc kubenswrapper[5104]: I0130 00:15:00.197405 5104 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Jan 30 00:15:00 crc kubenswrapper[5104]: I0130 00:15:00.197410 5104 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Jan 30 00:15:00 crc kubenswrapper[5104]: I0130 00:15:00.197508 5104 memory_manager.go:356] 
"RemoveStaleState removing state" podUID="60001511-f1e7-4c9e-9c1c-812709496c6c" containerName="installer" Jan 30 00:15:00 crc kubenswrapper[5104]: I0130 00:15:00.197519 5104 memory_manager.go:356] "RemoveStaleState removing state" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Jan 30 00:15:00 crc kubenswrapper[5104]: I0130 00:15:00.201357 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495535-nrqpx" Jan 30 00:15:00 crc kubenswrapper[5104]: I0130 00:15:00.203203 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Jan 30 00:15:00 crc kubenswrapper[5104]: I0130 00:15:00.208388 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495535-nrqpx"] Jan 30 00:15:00 crc kubenswrapper[5104]: I0130 00:15:00.210471 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Jan 30 00:15:00 crc kubenswrapper[5104]: I0130 00:15:00.303461 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fba08799-78ef-42d0-aaf0-0247ab99e81b-config-volume\") pod \"collect-profiles-29495535-nrqpx\" (UID: \"fba08799-78ef-42d0-aaf0-0247ab99e81b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495535-nrqpx" Jan 30 00:15:00 crc kubenswrapper[5104]: I0130 00:15:00.303524 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5c7s\" (UniqueName: \"kubernetes.io/projected/fba08799-78ef-42d0-aaf0-0247ab99e81b-kube-api-access-v5c7s\") pod \"collect-profiles-29495535-nrqpx\" (UID: \"fba08799-78ef-42d0-aaf0-0247ab99e81b\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29495535-nrqpx" Jan 30 00:15:00 crc kubenswrapper[5104]: I0130 00:15:00.303594 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fba08799-78ef-42d0-aaf0-0247ab99e81b-secret-volume\") pod \"collect-profiles-29495535-nrqpx\" (UID: \"fba08799-78ef-42d0-aaf0-0247ab99e81b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495535-nrqpx" Jan 30 00:15:00 crc kubenswrapper[5104]: I0130 00:15:00.404632 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-v5c7s\" (UniqueName: \"kubernetes.io/projected/fba08799-78ef-42d0-aaf0-0247ab99e81b-kube-api-access-v5c7s\") pod \"collect-profiles-29495535-nrqpx\" (UID: \"fba08799-78ef-42d0-aaf0-0247ab99e81b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495535-nrqpx" Jan 30 00:15:00 crc kubenswrapper[5104]: I0130 00:15:00.404787 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fba08799-78ef-42d0-aaf0-0247ab99e81b-secret-volume\") pod \"collect-profiles-29495535-nrqpx\" (UID: \"fba08799-78ef-42d0-aaf0-0247ab99e81b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495535-nrqpx" Jan 30 00:15:00 crc kubenswrapper[5104]: I0130 00:15:00.404916 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fba08799-78ef-42d0-aaf0-0247ab99e81b-config-volume\") pod \"collect-profiles-29495535-nrqpx\" (UID: \"fba08799-78ef-42d0-aaf0-0247ab99e81b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495535-nrqpx" Jan 30 00:15:00 crc kubenswrapper[5104]: I0130 00:15:00.406385 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/fba08799-78ef-42d0-aaf0-0247ab99e81b-config-volume\") pod \"collect-profiles-29495535-nrqpx\" (UID: \"fba08799-78ef-42d0-aaf0-0247ab99e81b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495535-nrqpx" Jan 30 00:15:00 crc kubenswrapper[5104]: I0130 00:15:00.413741 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fba08799-78ef-42d0-aaf0-0247ab99e81b-secret-volume\") pod \"collect-profiles-29495535-nrqpx\" (UID: \"fba08799-78ef-42d0-aaf0-0247ab99e81b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495535-nrqpx" Jan 30 00:15:00 crc kubenswrapper[5104]: I0130 00:15:00.423352 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5c7s\" (UniqueName: \"kubernetes.io/projected/fba08799-78ef-42d0-aaf0-0247ab99e81b-kube-api-access-v5c7s\") pod \"collect-profiles-29495535-nrqpx\" (UID: \"fba08799-78ef-42d0-aaf0-0247ab99e81b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495535-nrqpx" Jan 30 00:15:00 crc kubenswrapper[5104]: I0130 00:15:00.524224 5104 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495535-nrqpx" Jan 30 00:15:00 crc kubenswrapper[5104]: I0130 00:15:00.977344 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495535-nrqpx"] Jan 30 00:15:00 crc kubenswrapper[5104]: W0130 00:15:00.989942 5104 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfba08799_78ef_42d0_aaf0_0247ab99e81b.slice/crio-5816b37c005f6ed23e6014f0341ade983799133d752ab06fca31214216c7aa54 WatchSource:0}: Error finding container 5816b37c005f6ed23e6014f0341ade983799133d752ab06fca31214216c7aa54: Status 404 returned error can't find the container with id 5816b37c005f6ed23e6014f0341ade983799133d752ab06fca31214216c7aa54 Jan 30 00:15:01 crc kubenswrapper[5104]: I0130 00:15:01.187896 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495535-nrqpx" event={"ID":"fba08799-78ef-42d0-aaf0-0247ab99e81b","Type":"ContainerStarted","Data":"99c011dae7ab0a059389c2bb9ee773bbe7e5fcfc4632e2a905600107689b8a3e"} Jan 30 00:15:01 crc kubenswrapper[5104]: I0130 00:15:01.187946 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495535-nrqpx" event={"ID":"fba08799-78ef-42d0-aaf0-0247ab99e81b","Type":"ContainerStarted","Data":"5816b37c005f6ed23e6014f0341ade983799133d752ab06fca31214216c7aa54"} Jan 30 00:15:02 crc kubenswrapper[5104]: I0130 00:15:02.196565 5104 generic.go:358] "Generic (PLEG): container finished" podID="fba08799-78ef-42d0-aaf0-0247ab99e81b" containerID="99c011dae7ab0a059389c2bb9ee773bbe7e5fcfc4632e2a905600107689b8a3e" exitCode=0 Jan 30 00:15:02 crc kubenswrapper[5104]: I0130 00:15:02.196757 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495535-nrqpx" 
event={"ID":"fba08799-78ef-42d0-aaf0-0247ab99e81b","Type":"ContainerDied","Data":"99c011dae7ab0a059389c2bb9ee773bbe7e5fcfc4632e2a905600107689b8a3e"} Jan 30 00:15:03 crc kubenswrapper[5104]: I0130 00:15:03.405107 5104 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495535-nrqpx" Jan 30 00:15:03 crc kubenswrapper[5104]: I0130 00:15:03.452391 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v5c7s\" (UniqueName: \"kubernetes.io/projected/fba08799-78ef-42d0-aaf0-0247ab99e81b-kube-api-access-v5c7s\") pod \"fba08799-78ef-42d0-aaf0-0247ab99e81b\" (UID: \"fba08799-78ef-42d0-aaf0-0247ab99e81b\") " Jan 30 00:15:03 crc kubenswrapper[5104]: I0130 00:15:03.452455 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fba08799-78ef-42d0-aaf0-0247ab99e81b-config-volume\") pod \"fba08799-78ef-42d0-aaf0-0247ab99e81b\" (UID: \"fba08799-78ef-42d0-aaf0-0247ab99e81b\") " Jan 30 00:15:03 crc kubenswrapper[5104]: I0130 00:15:03.452769 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fba08799-78ef-42d0-aaf0-0247ab99e81b-secret-volume\") pod \"fba08799-78ef-42d0-aaf0-0247ab99e81b\" (UID: \"fba08799-78ef-42d0-aaf0-0247ab99e81b\") " Jan 30 00:15:03 crc kubenswrapper[5104]: I0130 00:15:03.453189 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fba08799-78ef-42d0-aaf0-0247ab99e81b-config-volume" (OuterVolumeSpecName: "config-volume") pod "fba08799-78ef-42d0-aaf0-0247ab99e81b" (UID: "fba08799-78ef-42d0-aaf0-0247ab99e81b"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:15:03 crc kubenswrapper[5104]: I0130 00:15:03.453338 5104 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fba08799-78ef-42d0-aaf0-0247ab99e81b-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 00:15:03 crc kubenswrapper[5104]: I0130 00:15:03.457784 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fba08799-78ef-42d0-aaf0-0247ab99e81b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "fba08799-78ef-42d0-aaf0-0247ab99e81b" (UID: "fba08799-78ef-42d0-aaf0-0247ab99e81b"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:15:03 crc kubenswrapper[5104]: I0130 00:15:03.457931 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fba08799-78ef-42d0-aaf0-0247ab99e81b-kube-api-access-v5c7s" (OuterVolumeSpecName: "kube-api-access-v5c7s") pod "fba08799-78ef-42d0-aaf0-0247ab99e81b" (UID: "fba08799-78ef-42d0-aaf0-0247ab99e81b"). InnerVolumeSpecName "kube-api-access-v5c7s". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:15:03 crc kubenswrapper[5104]: I0130 00:15:03.554447 5104 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fba08799-78ef-42d0-aaf0-0247ab99e81b-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 00:15:03 crc kubenswrapper[5104]: I0130 00:15:03.554499 5104 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-v5c7s\" (UniqueName: \"kubernetes.io/projected/fba08799-78ef-42d0-aaf0-0247ab99e81b-kube-api-access-v5c7s\") on node \"crc\" DevicePath \"\"" Jan 30 00:15:04 crc kubenswrapper[5104]: I0130 00:15:04.210815 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495535-nrqpx" event={"ID":"fba08799-78ef-42d0-aaf0-0247ab99e81b","Type":"ContainerDied","Data":"5816b37c005f6ed23e6014f0341ade983799133d752ab06fca31214216c7aa54"} Jan 30 00:15:04 crc kubenswrapper[5104]: I0130 00:15:04.210912 5104 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5816b37c005f6ed23e6014f0341ade983799133d752ab06fca31214216c7aa54" Jan 30 00:15:04 crc kubenswrapper[5104]: I0130 00:15:04.210911 5104 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495535-nrqpx" Jan 30 00:15:11 crc kubenswrapper[5104]: I0130 00:15:11.687363 5104 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-l7gdh"] Jan 30 00:15:11 crc kubenswrapper[5104]: I0130 00:15:11.688139 5104 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-65b6cccf98-l7gdh" podUID="df0257f9-bd1a-4915-8db4-aec4ffda4826" containerName="controller-manager" containerID="cri-o://b54cc5a542dfa3209f1d5177015a29e5cdb0a438e60ec01168813800d448a4e3" gracePeriod=30 Jan 30 00:15:11 crc kubenswrapper[5104]: I0130 00:15:11.701278 5104 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-c5tsr"] Jan 30 00:15:11 crc kubenswrapper[5104]: I0130 00:15:11.702476 5104 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-c5tsr" podUID="ff629e62-b58e-4d85-aa96-fbc1845b304b" containerName="route-controller-manager" containerID="cri-o://2994680e1a5a26eec666f4aa8261e2498488b22bc4469e01c7cd3f098b69a32c" gracePeriod=30 Jan 30 00:15:11 crc kubenswrapper[5104]: I0130 00:15:11.767892 5104 patch_prober.go:28] interesting pod/controller-manager-65b6cccf98-l7gdh container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Jan 30 00:15:11 crc kubenswrapper[5104]: I0130 00:15:11.768003 5104 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-65b6cccf98-l7gdh" podUID="df0257f9-bd1a-4915-8db4-aec4ffda4826" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": 
dial tcp 10.217.0.6:8443: connect: connection refused" Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 00:15:12.031654 5104 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-l7gdh" Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 00:15:12.036245 5104 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-c5tsr" Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 00:15:12.068077 5104 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5cf8df6c94-qhgxh"] Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 00:15:12.073965 5104 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ff629e62-b58e-4d85-aa96-fbc1845b304b" containerName="route-controller-manager" Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 00:15:12.074004 5104 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff629e62-b58e-4d85-aa96-fbc1845b304b" containerName="route-controller-manager" Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 00:15:12.074079 5104 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fba08799-78ef-42d0-aaf0-0247ab99e81b" containerName="collect-profiles" Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 00:15:12.074088 5104 state_mem.go:107] "Deleted CPUSet assignment" podUID="fba08799-78ef-42d0-aaf0-0247ab99e81b" containerName="collect-profiles" Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 00:15:12.074101 5104 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="df0257f9-bd1a-4915-8db4-aec4ffda4826" containerName="controller-manager" Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 00:15:12.074108 5104 state_mem.go:107] "Deleted CPUSet assignment" podUID="df0257f9-bd1a-4915-8db4-aec4ffda4826" containerName="controller-manager" Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 
00:15:12.074278 5104 memory_manager.go:356] "RemoveStaleState removing state" podUID="ff629e62-b58e-4d85-aa96-fbc1845b304b" containerName="route-controller-manager" Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 00:15:12.074298 5104 memory_manager.go:356] "RemoveStaleState removing state" podUID="df0257f9-bd1a-4915-8db4-aec4ffda4826" containerName="controller-manager" Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 00:15:12.074312 5104 memory_manager.go:356] "RemoveStaleState removing state" podUID="fba08799-78ef-42d0-aaf0-0247ab99e81b" containerName="collect-profiles" Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 00:15:12.083108 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5cf8df6c94-qhgxh"] Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 00:15:12.083272 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5cf8df6c94-qhgxh" Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 00:15:12.086625 5104 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-647ff7f58-ftng5"] Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 00:15:12.090987 5104 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-647ff7f58-ftng5" Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 00:15:12.094826 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-647ff7f58-ftng5"] Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 00:15:12.189042 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df0257f9-bd1a-4915-8db4-aec4ffda4826-config\") pod \"df0257f9-bd1a-4915-8db4-aec4ffda4826\" (UID: \"df0257f9-bd1a-4915-8db4-aec4ffda4826\") " Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 00:15:12.189398 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/df0257f9-bd1a-4915-8db4-aec4ffda4826-tmp\") pod \"df0257f9-bd1a-4915-8db4-aec4ffda4826\" (UID: \"df0257f9-bd1a-4915-8db4-aec4ffda4826\") " Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 00:15:12.189472 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df0257f9-bd1a-4915-8db4-aec4ffda4826-serving-cert\") pod \"df0257f9-bd1a-4915-8db4-aec4ffda4826\" (UID: \"df0257f9-bd1a-4915-8db4-aec4ffda4826\") " Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 00:15:12.189507 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ms24b\" (UniqueName: \"kubernetes.io/projected/df0257f9-bd1a-4915-8db4-aec4ffda4826-kube-api-access-ms24b\") pod \"df0257f9-bd1a-4915-8db4-aec4ffda4826\" (UID: \"df0257f9-bd1a-4915-8db4-aec4ffda4826\") " Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 00:15:12.189558 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ff629e62-b58e-4d85-aa96-fbc1845b304b-tmp\") pod \"ff629e62-b58e-4d85-aa96-fbc1845b304b\" (UID: 
\"ff629e62-b58e-4d85-aa96-fbc1845b304b\") " Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 00:15:12.189619 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/df0257f9-bd1a-4915-8db4-aec4ffda4826-client-ca\") pod \"df0257f9-bd1a-4915-8db4-aec4ffda4826\" (UID: \"df0257f9-bd1a-4915-8db4-aec4ffda4826\") " Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 00:15:12.189654 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ff629e62-b58e-4d85-aa96-fbc1845b304b-client-ca\") pod \"ff629e62-b58e-4d85-aa96-fbc1845b304b\" (UID: \"ff629e62-b58e-4d85-aa96-fbc1845b304b\") " Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 00:15:12.189690 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrnfz\" (UniqueName: \"kubernetes.io/projected/ff629e62-b58e-4d85-aa96-fbc1845b304b-kube-api-access-zrnfz\") pod \"ff629e62-b58e-4d85-aa96-fbc1845b304b\" (UID: \"ff629e62-b58e-4d85-aa96-fbc1845b304b\") " Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 00:15:12.189709 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff629e62-b58e-4d85-aa96-fbc1845b304b-config\") pod \"ff629e62-b58e-4d85-aa96-fbc1845b304b\" (UID: \"ff629e62-b58e-4d85-aa96-fbc1845b304b\") " Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 00:15:12.189740 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/df0257f9-bd1a-4915-8db4-aec4ffda4826-proxy-ca-bundles\") pod \"df0257f9-bd1a-4915-8db4-aec4ffda4826\" (UID: \"df0257f9-bd1a-4915-8db4-aec4ffda4826\") " Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 00:15:12.189765 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/ff629e62-b58e-4d85-aa96-fbc1845b304b-serving-cert\") pod \"ff629e62-b58e-4d85-aa96-fbc1845b304b\" (UID: \"ff629e62-b58e-4d85-aa96-fbc1845b304b\") " Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 00:15:12.189974 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/96fa7b64-44c0-44b2-be6b-a0e31861888b-tmp\") pod \"route-controller-manager-647ff7f58-ftng5\" (UID: \"96fa7b64-44c0-44b2-be6b-a0e31861888b\") " pod="openshift-route-controller-manager/route-controller-manager-647ff7f58-ftng5" Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 00:15:12.190064 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4jvf\" (UniqueName: \"kubernetes.io/projected/d2fa15a9-393b-425f-8093-7cd53c9cb15e-kube-api-access-d4jvf\") pod \"controller-manager-5cf8df6c94-qhgxh\" (UID: \"d2fa15a9-393b-425f-8093-7cd53c9cb15e\") " pod="openshift-controller-manager/controller-manager-5cf8df6c94-qhgxh" Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 00:15:12.190142 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d2fa15a9-393b-425f-8093-7cd53c9cb15e-proxy-ca-bundles\") pod \"controller-manager-5cf8df6c94-qhgxh\" (UID: \"d2fa15a9-393b-425f-8093-7cd53c9cb15e\") " pod="openshift-controller-manager/controller-manager-5cf8df6c94-qhgxh" Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 00:15:12.190176 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/96fa7b64-44c0-44b2-be6b-a0e31861888b-client-ca\") pod \"route-controller-manager-647ff7f58-ftng5\" (UID: \"96fa7b64-44c0-44b2-be6b-a0e31861888b\") " pod="openshift-route-controller-manager/route-controller-manager-647ff7f58-ftng5" Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 
00:15:12.190200 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2fa15a9-393b-425f-8093-7cd53c9cb15e-serving-cert\") pod \"controller-manager-5cf8df6c94-qhgxh\" (UID: \"d2fa15a9-393b-425f-8093-7cd53c9cb15e\") " pod="openshift-controller-manager/controller-manager-5cf8df6c94-qhgxh" Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 00:15:12.190228 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d2fa15a9-393b-425f-8093-7cd53c9cb15e-tmp\") pod \"controller-manager-5cf8df6c94-qhgxh\" (UID: \"d2fa15a9-393b-425f-8093-7cd53c9cb15e\") " pod="openshift-controller-manager/controller-manager-5cf8df6c94-qhgxh" Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 00:15:12.190254 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d2fa15a9-393b-425f-8093-7cd53c9cb15e-client-ca\") pod \"controller-manager-5cf8df6c94-qhgxh\" (UID: \"d2fa15a9-393b-425f-8093-7cd53c9cb15e\") " pod="openshift-controller-manager/controller-manager-5cf8df6c94-qhgxh" Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 00:15:12.190285 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96fa7b64-44c0-44b2-be6b-a0e31861888b-serving-cert\") pod \"route-controller-manager-647ff7f58-ftng5\" (UID: \"96fa7b64-44c0-44b2-be6b-a0e31861888b\") " pod="openshift-route-controller-manager/route-controller-manager-647ff7f58-ftng5" Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 00:15:12.190355 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2fa15a9-393b-425f-8093-7cd53c9cb15e-config\") pod \"controller-manager-5cf8df6c94-qhgxh\" (UID: 
\"d2fa15a9-393b-425f-8093-7cd53c9cb15e\") " pod="openshift-controller-manager/controller-manager-5cf8df6c94-qhgxh" Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 00:15:12.190491 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff629e62-b58e-4d85-aa96-fbc1845b304b-tmp" (OuterVolumeSpecName: "tmp") pod "ff629e62-b58e-4d85-aa96-fbc1845b304b" (UID: "ff629e62-b58e-4d85-aa96-fbc1845b304b"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 00:15:12.190630 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df0257f9-bd1a-4915-8db4-aec4ffda4826-tmp" (OuterVolumeSpecName: "tmp") pod "df0257f9-bd1a-4915-8db4-aec4ffda4826" (UID: "df0257f9-bd1a-4915-8db4-aec4ffda4826"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 00:15:12.190811 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96fa7b64-44c0-44b2-be6b-a0e31861888b-config\") pod \"route-controller-manager-647ff7f58-ftng5\" (UID: \"96fa7b64-44c0-44b2-be6b-a0e31861888b\") " pod="openshift-route-controller-manager/route-controller-manager-647ff7f58-ftng5" Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 00:15:12.190870 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tt4jq\" (UniqueName: \"kubernetes.io/projected/96fa7b64-44c0-44b2-be6b-a0e31861888b-kube-api-access-tt4jq\") pod \"route-controller-manager-647ff7f58-ftng5\" (UID: \"96fa7b64-44c0-44b2-be6b-a0e31861888b\") " pod="openshift-route-controller-manager/route-controller-manager-647ff7f58-ftng5" Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 00:15:12.191060 5104 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: 
\"kubernetes.io/empty-dir/df0257f9-bd1a-4915-8db4-aec4ffda4826-tmp\") on node \"crc\" DevicePath \"\"" Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 00:15:12.191098 5104 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ff629e62-b58e-4d85-aa96-fbc1845b304b-tmp\") on node \"crc\" DevicePath \"\"" Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 00:15:12.190105 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df0257f9-bd1a-4915-8db4-aec4ffda4826-config" (OuterVolumeSpecName: "config") pod "df0257f9-bd1a-4915-8db4-aec4ffda4826" (UID: "df0257f9-bd1a-4915-8db4-aec4ffda4826"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 00:15:12.191209 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff629e62-b58e-4d85-aa96-fbc1845b304b-client-ca" (OuterVolumeSpecName: "client-ca") pod "ff629e62-b58e-4d85-aa96-fbc1845b304b" (UID: "ff629e62-b58e-4d85-aa96-fbc1845b304b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 00:15:12.191230 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df0257f9-bd1a-4915-8db4-aec4ffda4826-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "df0257f9-bd1a-4915-8db4-aec4ffda4826" (UID: "df0257f9-bd1a-4915-8db4-aec4ffda4826"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 00:15:12.191440 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff629e62-b58e-4d85-aa96-fbc1845b304b-config" (OuterVolumeSpecName: "config") pod "ff629e62-b58e-4d85-aa96-fbc1845b304b" (UID: "ff629e62-b58e-4d85-aa96-fbc1845b304b"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 00:15:12.191641 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df0257f9-bd1a-4915-8db4-aec4ffda4826-client-ca" (OuterVolumeSpecName: "client-ca") pod "df0257f9-bd1a-4915-8db4-aec4ffda4826" (UID: "df0257f9-bd1a-4915-8db4-aec4ffda4826"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 00:15:12.196096 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff629e62-b58e-4d85-aa96-fbc1845b304b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ff629e62-b58e-4d85-aa96-fbc1845b304b" (UID: "ff629e62-b58e-4d85-aa96-fbc1845b304b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 00:15:12.199355 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df0257f9-bd1a-4915-8db4-aec4ffda4826-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "df0257f9-bd1a-4915-8db4-aec4ffda4826" (UID: "df0257f9-bd1a-4915-8db4-aec4ffda4826"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 00:15:12.199411 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df0257f9-bd1a-4915-8db4-aec4ffda4826-kube-api-access-ms24b" (OuterVolumeSpecName: "kube-api-access-ms24b") pod "df0257f9-bd1a-4915-8db4-aec4ffda4826" (UID: "df0257f9-bd1a-4915-8db4-aec4ffda4826"). InnerVolumeSpecName "kube-api-access-ms24b". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 00:15:12.201205 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff629e62-b58e-4d85-aa96-fbc1845b304b-kube-api-access-zrnfz" (OuterVolumeSpecName: "kube-api-access-zrnfz") pod "ff629e62-b58e-4d85-aa96-fbc1845b304b" (UID: "ff629e62-b58e-4d85-aa96-fbc1845b304b"). InnerVolumeSpecName "kube-api-access-zrnfz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 00:15:12.265332 5104 generic.go:358] "Generic (PLEG): container finished" podID="df0257f9-bd1a-4915-8db4-aec4ffda4826" containerID="b54cc5a542dfa3209f1d5177015a29e5cdb0a438e60ec01168813800d448a4e3" exitCode=0 Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 00:15:12.265438 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-l7gdh" event={"ID":"df0257f9-bd1a-4915-8db4-aec4ffda4826","Type":"ContainerDied","Data":"b54cc5a542dfa3209f1d5177015a29e5cdb0a438e60ec01168813800d448a4e3"} Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 00:15:12.265453 5104 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-l7gdh" Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 00:15:12.265478 5104 scope.go:117] "RemoveContainer" containerID="b54cc5a542dfa3209f1d5177015a29e5cdb0a438e60ec01168813800d448a4e3" Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 00:15:12.265467 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-l7gdh" event={"ID":"df0257f9-bd1a-4915-8db4-aec4ffda4826","Type":"ContainerDied","Data":"b33358b75e0aa79bf6d317db840bb26bc1782e576f6a9b2cc11b1f35e34063c2"} Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 00:15:12.268770 5104 generic.go:358] "Generic (PLEG): container finished" podID="ff629e62-b58e-4d85-aa96-fbc1845b304b" containerID="2994680e1a5a26eec666f4aa8261e2498488b22bc4469e01c7cd3f098b69a32c" exitCode=0 Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 00:15:12.268993 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-c5tsr" event={"ID":"ff629e62-b58e-4d85-aa96-fbc1845b304b","Type":"ContainerDied","Data":"2994680e1a5a26eec666f4aa8261e2498488b22bc4469e01c7cd3f098b69a32c"} Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 00:15:12.269030 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-c5tsr" event={"ID":"ff629e62-b58e-4d85-aa96-fbc1845b304b","Type":"ContainerDied","Data":"eb7550f4e431003bb67113687f3142c13f17529aa85082ce1bb3423350829ff7"} Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 00:15:12.268997 5104 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-c5tsr" Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 00:15:12.284599 5104 ???:1] "http: TLS handshake error from 192.168.126.11:59096: no serving certificate available for the kubelet" Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 00:15:12.292654 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/96fa7b64-44c0-44b2-be6b-a0e31861888b-client-ca\") pod \"route-controller-manager-647ff7f58-ftng5\" (UID: \"96fa7b64-44c0-44b2-be6b-a0e31861888b\") " pod="openshift-route-controller-manager/route-controller-manager-647ff7f58-ftng5" Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 00:15:12.292715 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2fa15a9-393b-425f-8093-7cd53c9cb15e-serving-cert\") pod \"controller-manager-5cf8df6c94-qhgxh\" (UID: \"d2fa15a9-393b-425f-8093-7cd53c9cb15e\") " pod="openshift-controller-manager/controller-manager-5cf8df6c94-qhgxh" Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 00:15:12.293842 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/96fa7b64-44c0-44b2-be6b-a0e31861888b-client-ca\") pod \"route-controller-manager-647ff7f58-ftng5\" (UID: \"96fa7b64-44c0-44b2-be6b-a0e31861888b\") " pod="openshift-route-controller-manager/route-controller-manager-647ff7f58-ftng5" Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 00:15:12.293941 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d2fa15a9-393b-425f-8093-7cd53c9cb15e-tmp\") pod \"controller-manager-5cf8df6c94-qhgxh\" (UID: \"d2fa15a9-393b-425f-8093-7cd53c9cb15e\") " pod="openshift-controller-manager/controller-manager-5cf8df6c94-qhgxh" Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 
00:15:12.293978 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d2fa15a9-393b-425f-8093-7cd53c9cb15e-client-ca\") pod \"controller-manager-5cf8df6c94-qhgxh\" (UID: \"d2fa15a9-393b-425f-8093-7cd53c9cb15e\") " pod="openshift-controller-manager/controller-manager-5cf8df6c94-qhgxh" Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 00:15:12.294033 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96fa7b64-44c0-44b2-be6b-a0e31861888b-serving-cert\") pod \"route-controller-manager-647ff7f58-ftng5\" (UID: \"96fa7b64-44c0-44b2-be6b-a0e31861888b\") " pod="openshift-route-controller-manager/route-controller-manager-647ff7f58-ftng5" Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 00:15:12.294145 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2fa15a9-393b-425f-8093-7cd53c9cb15e-config\") pod \"controller-manager-5cf8df6c94-qhgxh\" (UID: \"d2fa15a9-393b-425f-8093-7cd53c9cb15e\") " pod="openshift-controller-manager/controller-manager-5cf8df6c94-qhgxh" Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 00:15:12.294278 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96fa7b64-44c0-44b2-be6b-a0e31861888b-config\") pod \"route-controller-manager-647ff7f58-ftng5\" (UID: \"96fa7b64-44c0-44b2-be6b-a0e31861888b\") " pod="openshift-route-controller-manager/route-controller-manager-647ff7f58-ftng5" Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 00:15:12.294318 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tt4jq\" (UniqueName: \"kubernetes.io/projected/96fa7b64-44c0-44b2-be6b-a0e31861888b-kube-api-access-tt4jq\") pod \"route-controller-manager-647ff7f58-ftng5\" (UID: \"96fa7b64-44c0-44b2-be6b-a0e31861888b\") " 
pod="openshift-route-controller-manager/route-controller-manager-647ff7f58-ftng5" Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 00:15:12.294416 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/96fa7b64-44c0-44b2-be6b-a0e31861888b-tmp\") pod \"route-controller-manager-647ff7f58-ftng5\" (UID: \"96fa7b64-44c0-44b2-be6b-a0e31861888b\") " pod="openshift-route-controller-manager/route-controller-manager-647ff7f58-ftng5" Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 00:15:12.294536 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-d4jvf\" (UniqueName: \"kubernetes.io/projected/d2fa15a9-393b-425f-8093-7cd53c9cb15e-kube-api-access-d4jvf\") pod \"controller-manager-5cf8df6c94-qhgxh\" (UID: \"d2fa15a9-393b-425f-8093-7cd53c9cb15e\") " pod="openshift-controller-manager/controller-manager-5cf8df6c94-qhgxh" Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 00:15:12.294618 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d2fa15a9-393b-425f-8093-7cd53c9cb15e-proxy-ca-bundles\") pod \"controller-manager-5cf8df6c94-qhgxh\" (UID: \"d2fa15a9-393b-425f-8093-7cd53c9cb15e\") " pod="openshift-controller-manager/controller-manager-5cf8df6c94-qhgxh" Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 00:15:12.294667 5104 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df0257f9-bd1a-4915-8db4-aec4ffda4826-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 00:15:12.294679 5104 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df0257f9-bd1a-4915-8db4-aec4ffda4826-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 00:15:12.294710 5104 reconciler_common.go:299] "Volume detached for volume 
\"kube-api-access-ms24b\" (UniqueName: \"kubernetes.io/projected/df0257f9-bd1a-4915-8db4-aec4ffda4826-kube-api-access-ms24b\") on node \"crc\" DevicePath \"\"" Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 00:15:12.294722 5104 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/df0257f9-bd1a-4915-8db4-aec4ffda4826-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 00:15:12.294731 5104 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ff629e62-b58e-4d85-aa96-fbc1845b304b-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 00:15:12.294739 5104 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zrnfz\" (UniqueName: \"kubernetes.io/projected/ff629e62-b58e-4d85-aa96-fbc1845b304b-kube-api-access-zrnfz\") on node \"crc\" DevicePath \"\"" Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 00:15:12.294748 5104 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff629e62-b58e-4d85-aa96-fbc1845b304b-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 00:15:12.294756 5104 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/df0257f9-bd1a-4915-8db4-aec4ffda4826-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 00:15:12.294765 5104 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff629e62-b58e-4d85-aa96-fbc1845b304b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 00:15:12.296884 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2fa15a9-393b-425f-8093-7cd53c9cb15e-config\") pod 
\"controller-manager-5cf8df6c94-qhgxh\" (UID: \"d2fa15a9-393b-425f-8093-7cd53c9cb15e\") " pod="openshift-controller-manager/controller-manager-5cf8df6c94-qhgxh" Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 00:15:12.297956 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d2fa15a9-393b-425f-8093-7cd53c9cb15e-client-ca\") pod \"controller-manager-5cf8df6c94-qhgxh\" (UID: \"d2fa15a9-393b-425f-8093-7cd53c9cb15e\") " pod="openshift-controller-manager/controller-manager-5cf8df6c94-qhgxh" Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 00:15:12.298332 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d2fa15a9-393b-425f-8093-7cd53c9cb15e-tmp\") pod \"controller-manager-5cf8df6c94-qhgxh\" (UID: \"d2fa15a9-393b-425f-8093-7cd53c9cb15e\") " pod="openshift-controller-manager/controller-manager-5cf8df6c94-qhgxh" Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 00:15:12.298616 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/96fa7b64-44c0-44b2-be6b-a0e31861888b-tmp\") pod \"route-controller-manager-647ff7f58-ftng5\" (UID: \"96fa7b64-44c0-44b2-be6b-a0e31861888b\") " pod="openshift-route-controller-manager/route-controller-manager-647ff7f58-ftng5" Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 00:15:12.298720 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d2fa15a9-393b-425f-8093-7cd53c9cb15e-proxy-ca-bundles\") pod \"controller-manager-5cf8df6c94-qhgxh\" (UID: \"d2fa15a9-393b-425f-8093-7cd53c9cb15e\") " pod="openshift-controller-manager/controller-manager-5cf8df6c94-qhgxh" Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 00:15:12.300205 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/96fa7b64-44c0-44b2-be6b-a0e31861888b-config\") pod \"route-controller-manager-647ff7f58-ftng5\" (UID: \"96fa7b64-44c0-44b2-be6b-a0e31861888b\") " pod="openshift-route-controller-manager/route-controller-manager-647ff7f58-ftng5" Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 00:15:12.303327 5104 scope.go:117] "RemoveContainer" containerID="b54cc5a542dfa3209f1d5177015a29e5cdb0a438e60ec01168813800d448a4e3" Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 00:15:12.303503 5104 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-c5tsr"] Jan 30 00:15:12 crc kubenswrapper[5104]: E0130 00:15:12.304094 5104 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b54cc5a542dfa3209f1d5177015a29e5cdb0a438e60ec01168813800d448a4e3\": container with ID starting with b54cc5a542dfa3209f1d5177015a29e5cdb0a438e60ec01168813800d448a4e3 not found: ID does not exist" containerID="b54cc5a542dfa3209f1d5177015a29e5cdb0a438e60ec01168813800d448a4e3" Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 00:15:12.304152 5104 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b54cc5a542dfa3209f1d5177015a29e5cdb0a438e60ec01168813800d448a4e3"} err="failed to get container status \"b54cc5a542dfa3209f1d5177015a29e5cdb0a438e60ec01168813800d448a4e3\": rpc error: code = NotFound desc = could not find container \"b54cc5a542dfa3209f1d5177015a29e5cdb0a438e60ec01168813800d448a4e3\": container with ID starting with b54cc5a542dfa3209f1d5177015a29e5cdb0a438e60ec01168813800d448a4e3 not found: ID does not exist" Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 00:15:12.304190 5104 scope.go:117] "RemoveContainer" containerID="2994680e1a5a26eec666f4aa8261e2498488b22bc4469e01c7cd3f098b69a32c" Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 00:15:12.305488 5104 operation_generator.go:615] "MountVolume.SetUp succeeded 
for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96fa7b64-44c0-44b2-be6b-a0e31861888b-serving-cert\") pod \"route-controller-manager-647ff7f58-ftng5\" (UID: \"96fa7b64-44c0-44b2-be6b-a0e31861888b\") " pod="openshift-route-controller-manager/route-controller-manager-647ff7f58-ftng5" Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 00:15:12.308253 5104 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-c5tsr"] Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 00:15:12.311282 5104 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-l7gdh"] Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 00:15:12.311655 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2fa15a9-393b-425f-8093-7cd53c9cb15e-serving-cert\") pod \"controller-manager-5cf8df6c94-qhgxh\" (UID: \"d2fa15a9-393b-425f-8093-7cd53c9cb15e\") " pod="openshift-controller-manager/controller-manager-5cf8df6c94-qhgxh" Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 00:15:12.314922 5104 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-l7gdh"] Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 00:15:12.319553 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tt4jq\" (UniqueName: \"kubernetes.io/projected/96fa7b64-44c0-44b2-be6b-a0e31861888b-kube-api-access-tt4jq\") pod \"route-controller-manager-647ff7f58-ftng5\" (UID: \"96fa7b64-44c0-44b2-be6b-a0e31861888b\") " pod="openshift-route-controller-manager/route-controller-manager-647ff7f58-ftng5" Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 00:15:12.319819 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4jvf\" (UniqueName: \"kubernetes.io/projected/d2fa15a9-393b-425f-8093-7cd53c9cb15e-kube-api-access-d4jvf\") pod 
\"controller-manager-5cf8df6c94-qhgxh\" (UID: \"d2fa15a9-393b-425f-8093-7cd53c9cb15e\") " pod="openshift-controller-manager/controller-manager-5cf8df6c94-qhgxh" Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 00:15:12.328607 5104 scope.go:117] "RemoveContainer" containerID="2994680e1a5a26eec666f4aa8261e2498488b22bc4469e01c7cd3f098b69a32c" Jan 30 00:15:12 crc kubenswrapper[5104]: E0130 00:15:12.329160 5104 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2994680e1a5a26eec666f4aa8261e2498488b22bc4469e01c7cd3f098b69a32c\": container with ID starting with 2994680e1a5a26eec666f4aa8261e2498488b22bc4469e01c7cd3f098b69a32c not found: ID does not exist" containerID="2994680e1a5a26eec666f4aa8261e2498488b22bc4469e01c7cd3f098b69a32c" Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 00:15:12.329338 5104 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2994680e1a5a26eec666f4aa8261e2498488b22bc4469e01c7cd3f098b69a32c"} err="failed to get container status \"2994680e1a5a26eec666f4aa8261e2498488b22bc4469e01c7cd3f098b69a32c\": rpc error: code = NotFound desc = could not find container \"2994680e1a5a26eec666f4aa8261e2498488b22bc4469e01c7cd3f098b69a32c\": container with ID starting with 2994680e1a5a26eec666f4aa8261e2498488b22bc4469e01c7cd3f098b69a32c not found: ID does not exist" Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 00:15:12.415078 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5cf8df6c94-qhgxh" Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 00:15:12.428719 5104 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-647ff7f58-ftng5" Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 00:15:12.533303 5104 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df0257f9-bd1a-4915-8db4-aec4ffda4826" path="/var/lib/kubelet/pods/df0257f9-bd1a-4915-8db4-aec4ffda4826/volumes" Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 00:15:12.534545 5104 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff629e62-b58e-4d85-aa96-fbc1845b304b" path="/var/lib/kubelet/pods/ff629e62-b58e-4d85-aa96-fbc1845b304b/volumes" Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 00:15:12.636178 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-647ff7f58-ftng5"] Jan 30 00:15:12 crc kubenswrapper[5104]: W0130 00:15:12.644095 5104 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod96fa7b64_44c0_44b2_be6b_a0e31861888b.slice/crio-cdf15c30be1d297c0bdcc094f174059066aabd87c2084e260b49ca451655af8d WatchSource:0}: Error finding container cdf15c30be1d297c0bdcc094f174059066aabd87c2084e260b49ca451655af8d: Status 404 returned error can't find the container with id cdf15c30be1d297c0bdcc094f174059066aabd87c2084e260b49ca451655af8d Jan 30 00:15:12 crc kubenswrapper[5104]: I0130 00:15:12.676883 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5cf8df6c94-qhgxh"] Jan 30 00:15:13 crc kubenswrapper[5104]: I0130 00:15:13.275840 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-647ff7f58-ftng5" event={"ID":"96fa7b64-44c0-44b2-be6b-a0e31861888b","Type":"ContainerStarted","Data":"6b7b705a7d7589e8188bb0f5a9f9693ef9ba607a6d209d83645b4d9a34d78245"} Jan 30 00:15:13 crc kubenswrapper[5104]: I0130 00:15:13.276183 5104 kubelet.go:2658] "SyncLoop (probe)" 
probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-647ff7f58-ftng5" Jan 30 00:15:13 crc kubenswrapper[5104]: I0130 00:15:13.276198 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-647ff7f58-ftng5" event={"ID":"96fa7b64-44c0-44b2-be6b-a0e31861888b","Type":"ContainerStarted","Data":"cdf15c30be1d297c0bdcc094f174059066aabd87c2084e260b49ca451655af8d"} Jan 30 00:15:13 crc kubenswrapper[5104]: I0130 00:15:13.284677 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5cf8df6c94-qhgxh" event={"ID":"d2fa15a9-393b-425f-8093-7cd53c9cb15e","Type":"ContainerStarted","Data":"92c98f6be3d7db63f211809375e9453461fc8dc672a6bd8cb3705086ae0c69f9"} Jan 30 00:15:13 crc kubenswrapper[5104]: I0130 00:15:13.284717 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5cf8df6c94-qhgxh" event={"ID":"d2fa15a9-393b-425f-8093-7cd53c9cb15e","Type":"ContainerStarted","Data":"d800750513e04cf830e6ad4a8588fac3be5be3c1459b2d7154ecd8dfd0606ba5"} Jan 30 00:15:13 crc kubenswrapper[5104]: I0130 00:15:13.285121 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-5cf8df6c94-qhgxh" Jan 30 00:15:13 crc kubenswrapper[5104]: I0130 00:15:13.288816 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-647ff7f58-ftng5" Jan 30 00:15:13 crc kubenswrapper[5104]: I0130 00:15:13.362151 5104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-647ff7f58-ftng5" podStartSLOduration=2.362129656 podStartE2EDuration="2.362129656s" podCreationTimestamp="2026-01-30 00:15:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:15:13.332255973 +0000 UTC m=+294.064595212" watchObservedRunningTime="2026-01-30 00:15:13.362129656 +0000 UTC m=+294.094468875" Jan 30 00:15:13 crc kubenswrapper[5104]: I0130 00:15:13.395642 5104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5cf8df6c94-qhgxh" podStartSLOduration=2.395625219 podStartE2EDuration="2.395625219s" podCreationTimestamp="2026-01-30 00:15:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:15:13.393704581 +0000 UTC m=+294.126043800" watchObservedRunningTime="2026-01-30 00:15:13.395625219 +0000 UTC m=+294.127964438" Jan 30 00:15:13 crc kubenswrapper[5104]: I0130 00:15:13.587047 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5cf8df6c94-qhgxh" Jan 30 00:15:14 crc kubenswrapper[5104]: I0130 00:15:14.949914 5104 patch_prober.go:28] interesting pod/machine-config-daemon-jzfxc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 00:15:14 crc kubenswrapper[5104]: I0130 00:15:14.950086 5104 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jzfxc" podUID="2f49b5db-a679-4eef-9bf2-8d0275caac12" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 00:15:14 crc kubenswrapper[5104]: I0130 00:15:14.950215 5104 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jzfxc" Jan 30 00:15:14 crc 
kubenswrapper[5104]: I0130 00:15:14.951349 5104 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f5b028a088c03809c64529cc57108c79c73124fc91728bb2bfc48406b3351ca6"} pod="openshift-machine-config-operator/machine-config-daemon-jzfxc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 00:15:14 crc kubenswrapper[5104]: I0130 00:15:14.951468 5104 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jzfxc" podUID="2f49b5db-a679-4eef-9bf2-8d0275caac12" containerName="machine-config-daemon" containerID="cri-o://f5b028a088c03809c64529cc57108c79c73124fc91728bb2bfc48406b3351ca6" gracePeriod=600 Jan 30 00:15:15 crc kubenswrapper[5104]: I0130 00:15:15.313043 5104 generic.go:358] "Generic (PLEG): container finished" podID="2f49b5db-a679-4eef-9bf2-8d0275caac12" containerID="f5b028a088c03809c64529cc57108c79c73124fc91728bb2bfc48406b3351ca6" exitCode=0 Jan 30 00:15:15 crc kubenswrapper[5104]: I0130 00:15:15.313159 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzfxc" event={"ID":"2f49b5db-a679-4eef-9bf2-8d0275caac12","Type":"ContainerDied","Data":"f5b028a088c03809c64529cc57108c79c73124fc91728bb2bfc48406b3351ca6"} Jan 30 00:15:16 crc kubenswrapper[5104]: I0130 00:15:16.321947 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzfxc" event={"ID":"2f49b5db-a679-4eef-9bf2-8d0275caac12","Type":"ContainerStarted","Data":"592be4ef21e7b38e7e47f25a331744fdeaee7be766fc0073ca4589c272651c5a"} Jan 30 00:15:20 crc kubenswrapper[5104]: I0130 00:15:20.707484 5104 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 30 
00:15:28 crc kubenswrapper[5104]: I0130 00:15:28.518451 5104 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 30 00:15:31 crc kubenswrapper[5104]: I0130 00:15:31.055580 5104 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5cf8df6c94-qhgxh"] Jan 30 00:15:31 crc kubenswrapper[5104]: I0130 00:15:31.055891 5104 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5cf8df6c94-qhgxh" podUID="d2fa15a9-393b-425f-8093-7cd53c9cb15e" containerName="controller-manager" containerID="cri-o://92c98f6be3d7db63f211809375e9453461fc8dc672a6bd8cb3705086ae0c69f9" gracePeriod=30 Jan 30 00:15:31 crc kubenswrapper[5104]: I0130 00:15:31.060902 5104 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-647ff7f58-ftng5"] Jan 30 00:15:31 crc kubenswrapper[5104]: I0130 00:15:31.061549 5104 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-647ff7f58-ftng5" podUID="96fa7b64-44c0-44b2-be6b-a0e31861888b" containerName="route-controller-manager" containerID="cri-o://6b7b705a7d7589e8188bb0f5a9f9693ef9ba607a6d209d83645b4d9a34d78245" gracePeriod=30 Jan 30 00:15:31 crc kubenswrapper[5104]: I0130 00:15:31.410752 5104 generic.go:358] "Generic (PLEG): container finished" podID="96fa7b64-44c0-44b2-be6b-a0e31861888b" containerID="6b7b705a7d7589e8188bb0f5a9f9693ef9ba607a6d209d83645b4d9a34d78245" exitCode=0 Jan 30 00:15:31 crc kubenswrapper[5104]: I0130 00:15:31.410843 5104 kubelet.go:2569] "SyncLoop (PLEG): event for 
pod" pod="openshift-route-controller-manager/route-controller-manager-647ff7f58-ftng5" event={"ID":"96fa7b64-44c0-44b2-be6b-a0e31861888b","Type":"ContainerDied","Data":"6b7b705a7d7589e8188bb0f5a9f9693ef9ba607a6d209d83645b4d9a34d78245"} Jan 30 00:15:31 crc kubenswrapper[5104]: I0130 00:15:31.412601 5104 generic.go:358] "Generic (PLEG): container finished" podID="d2fa15a9-393b-425f-8093-7cd53c9cb15e" containerID="92c98f6be3d7db63f211809375e9453461fc8dc672a6bd8cb3705086ae0c69f9" exitCode=0 Jan 30 00:15:31 crc kubenswrapper[5104]: I0130 00:15:31.412655 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5cf8df6c94-qhgxh" event={"ID":"d2fa15a9-393b-425f-8093-7cd53c9cb15e","Type":"ContainerDied","Data":"92c98f6be3d7db63f211809375e9453461fc8dc672a6bd8cb3705086ae0c69f9"} Jan 30 00:15:31 crc kubenswrapper[5104]: I0130 00:15:31.549620 5104 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-647ff7f58-ftng5" Jan 30 00:15:31 crc kubenswrapper[5104]: I0130 00:15:31.579297 5104 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7bc6fb6d58-xnrn9"] Jan 30 00:15:31 crc kubenswrapper[5104]: I0130 00:15:31.579911 5104 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="96fa7b64-44c0-44b2-be6b-a0e31861888b" containerName="route-controller-manager" Jan 30 00:15:31 crc kubenswrapper[5104]: I0130 00:15:31.579933 5104 state_mem.go:107] "Deleted CPUSet assignment" podUID="96fa7b64-44c0-44b2-be6b-a0e31861888b" containerName="route-controller-manager" Jan 30 00:15:31 crc kubenswrapper[5104]: I0130 00:15:31.580049 5104 memory_manager.go:356] "RemoveStaleState removing state" podUID="96fa7b64-44c0-44b2-be6b-a0e31861888b" containerName="route-controller-manager" Jan 30 00:15:31 crc kubenswrapper[5104]: I0130 00:15:31.583700 5104 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7bc6fb6d58-xnrn9" Jan 30 00:15:31 crc kubenswrapper[5104]: I0130 00:15:31.594667 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7bc6fb6d58-xnrn9"] Jan 30 00:15:31 crc kubenswrapper[5104]: I0130 00:15:31.686165 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96fa7b64-44c0-44b2-be6b-a0e31861888b-config\") pod \"96fa7b64-44c0-44b2-be6b-a0e31861888b\" (UID: \"96fa7b64-44c0-44b2-be6b-a0e31861888b\") " Jan 30 00:15:31 crc kubenswrapper[5104]: I0130 00:15:31.686267 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/96fa7b64-44c0-44b2-be6b-a0e31861888b-tmp\") pod \"96fa7b64-44c0-44b2-be6b-a0e31861888b\" (UID: \"96fa7b64-44c0-44b2-be6b-a0e31861888b\") " Jan 30 00:15:31 crc kubenswrapper[5104]: I0130 00:15:31.686283 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96fa7b64-44c0-44b2-be6b-a0e31861888b-serving-cert\") pod \"96fa7b64-44c0-44b2-be6b-a0e31861888b\" (UID: \"96fa7b64-44c0-44b2-be6b-a0e31861888b\") " Jan 30 00:15:31 crc kubenswrapper[5104]: I0130 00:15:31.686310 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tt4jq\" (UniqueName: \"kubernetes.io/projected/96fa7b64-44c0-44b2-be6b-a0e31861888b-kube-api-access-tt4jq\") pod \"96fa7b64-44c0-44b2-be6b-a0e31861888b\" (UID: \"96fa7b64-44c0-44b2-be6b-a0e31861888b\") " Jan 30 00:15:31 crc kubenswrapper[5104]: I0130 00:15:31.686357 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/96fa7b64-44c0-44b2-be6b-a0e31861888b-client-ca\") pod 
\"96fa7b64-44c0-44b2-be6b-a0e31861888b\" (UID: \"96fa7b64-44c0-44b2-be6b-a0e31861888b\") " Jan 30 00:15:31 crc kubenswrapper[5104]: I0130 00:15:31.686457 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5b35862a-cfe3-4de1-963a-211aa93379ce-tmp\") pod \"route-controller-manager-7bc6fb6d58-xnrn9\" (UID: \"5b35862a-cfe3-4de1-963a-211aa93379ce\") " pod="openshift-route-controller-manager/route-controller-manager-7bc6fb6d58-xnrn9" Jan 30 00:15:31 crc kubenswrapper[5104]: I0130 00:15:31.686500 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khxwp\" (UniqueName: \"kubernetes.io/projected/5b35862a-cfe3-4de1-963a-211aa93379ce-kube-api-access-khxwp\") pod \"route-controller-manager-7bc6fb6d58-xnrn9\" (UID: \"5b35862a-cfe3-4de1-963a-211aa93379ce\") " pod="openshift-route-controller-manager/route-controller-manager-7bc6fb6d58-xnrn9" Jan 30 00:15:31 crc kubenswrapper[5104]: I0130 00:15:31.686521 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5b35862a-cfe3-4de1-963a-211aa93379ce-client-ca\") pod \"route-controller-manager-7bc6fb6d58-xnrn9\" (UID: \"5b35862a-cfe3-4de1-963a-211aa93379ce\") " pod="openshift-route-controller-manager/route-controller-manager-7bc6fb6d58-xnrn9" Jan 30 00:15:31 crc kubenswrapper[5104]: I0130 00:15:31.686568 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b35862a-cfe3-4de1-963a-211aa93379ce-serving-cert\") pod \"route-controller-manager-7bc6fb6d58-xnrn9\" (UID: \"5b35862a-cfe3-4de1-963a-211aa93379ce\") " pod="openshift-route-controller-manager/route-controller-manager-7bc6fb6d58-xnrn9" Jan 30 00:15:31 crc kubenswrapper[5104]: I0130 00:15:31.686739 5104 operation_generator.go:781] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96fa7b64-44c0-44b2-be6b-a0e31861888b-tmp" (OuterVolumeSpecName: "tmp") pod "96fa7b64-44c0-44b2-be6b-a0e31861888b" (UID: "96fa7b64-44c0-44b2-be6b-a0e31861888b"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:15:31 crc kubenswrapper[5104]: I0130 00:15:31.687044 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b35862a-cfe3-4de1-963a-211aa93379ce-config\") pod \"route-controller-manager-7bc6fb6d58-xnrn9\" (UID: \"5b35862a-cfe3-4de1-963a-211aa93379ce\") " pod="openshift-route-controller-manager/route-controller-manager-7bc6fb6d58-xnrn9" Jan 30 00:15:31 crc kubenswrapper[5104]: I0130 00:15:31.687150 5104 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/96fa7b64-44c0-44b2-be6b-a0e31861888b-tmp\") on node \"crc\" DevicePath \"\"" Jan 30 00:15:31 crc kubenswrapper[5104]: I0130 00:15:31.687321 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96fa7b64-44c0-44b2-be6b-a0e31861888b-config" (OuterVolumeSpecName: "config") pod "96fa7b64-44c0-44b2-be6b-a0e31861888b" (UID: "96fa7b64-44c0-44b2-be6b-a0e31861888b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:15:31 crc kubenswrapper[5104]: I0130 00:15:31.687619 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96fa7b64-44c0-44b2-be6b-a0e31861888b-client-ca" (OuterVolumeSpecName: "client-ca") pod "96fa7b64-44c0-44b2-be6b-a0e31861888b" (UID: "96fa7b64-44c0-44b2-be6b-a0e31861888b"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:15:31 crc kubenswrapper[5104]: I0130 00:15:31.691358 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96fa7b64-44c0-44b2-be6b-a0e31861888b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "96fa7b64-44c0-44b2-be6b-a0e31861888b" (UID: "96fa7b64-44c0-44b2-be6b-a0e31861888b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:15:31 crc kubenswrapper[5104]: I0130 00:15:31.702692 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96fa7b64-44c0-44b2-be6b-a0e31861888b-kube-api-access-tt4jq" (OuterVolumeSpecName: "kube-api-access-tt4jq") pod "96fa7b64-44c0-44b2-be6b-a0e31861888b" (UID: "96fa7b64-44c0-44b2-be6b-a0e31861888b"). InnerVolumeSpecName "kube-api-access-tt4jq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:15:31 crc kubenswrapper[5104]: I0130 00:15:31.786187 5104 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5cf8df6c94-qhgxh" Jan 30 00:15:31 crc kubenswrapper[5104]: I0130 00:15:31.787815 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-khxwp\" (UniqueName: \"kubernetes.io/projected/5b35862a-cfe3-4de1-963a-211aa93379ce-kube-api-access-khxwp\") pod \"route-controller-manager-7bc6fb6d58-xnrn9\" (UID: \"5b35862a-cfe3-4de1-963a-211aa93379ce\") " pod="openshift-route-controller-manager/route-controller-manager-7bc6fb6d58-xnrn9" Jan 30 00:15:31 crc kubenswrapper[5104]: I0130 00:15:31.787868 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5b35862a-cfe3-4de1-963a-211aa93379ce-client-ca\") pod \"route-controller-manager-7bc6fb6d58-xnrn9\" (UID: \"5b35862a-cfe3-4de1-963a-211aa93379ce\") " pod="openshift-route-controller-manager/route-controller-manager-7bc6fb6d58-xnrn9" Jan 30 00:15:31 crc kubenswrapper[5104]: I0130 00:15:31.787901 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b35862a-cfe3-4de1-963a-211aa93379ce-serving-cert\") pod \"route-controller-manager-7bc6fb6d58-xnrn9\" (UID: \"5b35862a-cfe3-4de1-963a-211aa93379ce\") " pod="openshift-route-controller-manager/route-controller-manager-7bc6fb6d58-xnrn9" Jan 30 00:15:31 crc kubenswrapper[5104]: I0130 00:15:31.787956 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b35862a-cfe3-4de1-963a-211aa93379ce-config\") pod \"route-controller-manager-7bc6fb6d58-xnrn9\" (UID: \"5b35862a-cfe3-4de1-963a-211aa93379ce\") " pod="openshift-route-controller-manager/route-controller-manager-7bc6fb6d58-xnrn9" Jan 30 00:15:31 crc kubenswrapper[5104]: I0130 00:15:31.787983 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" 
(UniqueName: \"kubernetes.io/empty-dir/5b35862a-cfe3-4de1-963a-211aa93379ce-tmp\") pod \"route-controller-manager-7bc6fb6d58-xnrn9\" (UID: \"5b35862a-cfe3-4de1-963a-211aa93379ce\") " pod="openshift-route-controller-manager/route-controller-manager-7bc6fb6d58-xnrn9" Jan 30 00:15:31 crc kubenswrapper[5104]: I0130 00:15:31.788029 5104 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96fa7b64-44c0-44b2-be6b-a0e31861888b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:15:31 crc kubenswrapper[5104]: I0130 00:15:31.788040 5104 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tt4jq\" (UniqueName: \"kubernetes.io/projected/96fa7b64-44c0-44b2-be6b-a0e31861888b-kube-api-access-tt4jq\") on node \"crc\" DevicePath \"\"" Jan 30 00:15:31 crc kubenswrapper[5104]: I0130 00:15:31.788050 5104 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/96fa7b64-44c0-44b2-be6b-a0e31861888b-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 00:15:31 crc kubenswrapper[5104]: I0130 00:15:31.788059 5104 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96fa7b64-44c0-44b2-be6b-a0e31861888b-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:15:31 crc kubenswrapper[5104]: I0130 00:15:31.788556 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5b35862a-cfe3-4de1-963a-211aa93379ce-tmp\") pod \"route-controller-manager-7bc6fb6d58-xnrn9\" (UID: \"5b35862a-cfe3-4de1-963a-211aa93379ce\") " pod="openshift-route-controller-manager/route-controller-manager-7bc6fb6d58-xnrn9" Jan 30 00:15:31 crc kubenswrapper[5104]: I0130 00:15:31.789265 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5b35862a-cfe3-4de1-963a-211aa93379ce-client-ca\") pod 
\"route-controller-manager-7bc6fb6d58-xnrn9\" (UID: \"5b35862a-cfe3-4de1-963a-211aa93379ce\") " pod="openshift-route-controller-manager/route-controller-manager-7bc6fb6d58-xnrn9" Jan 30 00:15:31 crc kubenswrapper[5104]: I0130 00:15:31.790152 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b35862a-cfe3-4de1-963a-211aa93379ce-config\") pod \"route-controller-manager-7bc6fb6d58-xnrn9\" (UID: \"5b35862a-cfe3-4de1-963a-211aa93379ce\") " pod="openshift-route-controller-manager/route-controller-manager-7bc6fb6d58-xnrn9" Jan 30 00:15:31 crc kubenswrapper[5104]: I0130 00:15:31.793409 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b35862a-cfe3-4de1-963a-211aa93379ce-serving-cert\") pod \"route-controller-manager-7bc6fb6d58-xnrn9\" (UID: \"5b35862a-cfe3-4de1-963a-211aa93379ce\") " pod="openshift-route-controller-manager/route-controller-manager-7bc6fb6d58-xnrn9" Jan 30 00:15:31 crc kubenswrapper[5104]: I0130 00:15:31.815149 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-khxwp\" (UniqueName: \"kubernetes.io/projected/5b35862a-cfe3-4de1-963a-211aa93379ce-kube-api-access-khxwp\") pod \"route-controller-manager-7bc6fb6d58-xnrn9\" (UID: \"5b35862a-cfe3-4de1-963a-211aa93379ce\") " pod="openshift-route-controller-manager/route-controller-manager-7bc6fb6d58-xnrn9" Jan 30 00:15:31 crc kubenswrapper[5104]: I0130 00:15:31.816920 5104 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5c4c8c4f8d-d6mtr"] Jan 30 00:15:31 crc kubenswrapper[5104]: I0130 00:15:31.818226 5104 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d2fa15a9-393b-425f-8093-7cd53c9cb15e" containerName="controller-manager" Jan 30 00:15:31 crc kubenswrapper[5104]: I0130 00:15:31.818249 5104 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="d2fa15a9-393b-425f-8093-7cd53c9cb15e" containerName="controller-manager" Jan 30 00:15:31 crc kubenswrapper[5104]: I0130 00:15:31.818898 5104 memory_manager.go:356] "RemoveStaleState removing state" podUID="d2fa15a9-393b-425f-8093-7cd53c9cb15e" containerName="controller-manager" Jan 30 00:15:31 crc kubenswrapper[5104]: I0130 00:15:31.825364 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5c4c8c4f8d-d6mtr" Jan 30 00:15:31 crc kubenswrapper[5104]: I0130 00:15:31.837839 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5c4c8c4f8d-d6mtr"] Jan 30 00:15:31 crc kubenswrapper[5104]: I0130 00:15:31.889836 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d2fa15a9-393b-425f-8093-7cd53c9cb15e-client-ca\") pod \"d2fa15a9-393b-425f-8093-7cd53c9cb15e\" (UID: \"d2fa15a9-393b-425f-8093-7cd53c9cb15e\") " Jan 30 00:15:31 crc kubenswrapper[5104]: I0130 00:15:31.889991 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d2fa15a9-393b-425f-8093-7cd53c9cb15e-proxy-ca-bundles\") pod \"d2fa15a9-393b-425f-8093-7cd53c9cb15e\" (UID: \"d2fa15a9-393b-425f-8093-7cd53c9cb15e\") " Jan 30 00:15:31 crc kubenswrapper[5104]: I0130 00:15:31.890037 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2fa15a9-393b-425f-8093-7cd53c9cb15e-serving-cert\") pod \"d2fa15a9-393b-425f-8093-7cd53c9cb15e\" (UID: \"d2fa15a9-393b-425f-8093-7cd53c9cb15e\") " Jan 30 00:15:31 crc kubenswrapper[5104]: I0130 00:15:31.890128 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d2fa15a9-393b-425f-8093-7cd53c9cb15e-tmp\") pod 
\"d2fa15a9-393b-425f-8093-7cd53c9cb15e\" (UID: \"d2fa15a9-393b-425f-8093-7cd53c9cb15e\") " Jan 30 00:15:31 crc kubenswrapper[5104]: I0130 00:15:31.890208 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4jvf\" (UniqueName: \"kubernetes.io/projected/d2fa15a9-393b-425f-8093-7cd53c9cb15e-kube-api-access-d4jvf\") pod \"d2fa15a9-393b-425f-8093-7cd53c9cb15e\" (UID: \"d2fa15a9-393b-425f-8093-7cd53c9cb15e\") " Jan 30 00:15:31 crc kubenswrapper[5104]: I0130 00:15:31.890249 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2fa15a9-393b-425f-8093-7cd53c9cb15e-config\") pod \"d2fa15a9-393b-425f-8093-7cd53c9cb15e\" (UID: \"d2fa15a9-393b-425f-8093-7cd53c9cb15e\") " Jan 30 00:15:31 crc kubenswrapper[5104]: I0130 00:15:31.890606 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2fa15a9-393b-425f-8093-7cd53c9cb15e-tmp" (OuterVolumeSpecName: "tmp") pod "d2fa15a9-393b-425f-8093-7cd53c9cb15e" (UID: "d2fa15a9-393b-425f-8093-7cd53c9cb15e"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:15:31 crc kubenswrapper[5104]: I0130 00:15:31.890733 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2fa15a9-393b-425f-8093-7cd53c9cb15e-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "d2fa15a9-393b-425f-8093-7cd53c9cb15e" (UID: "d2fa15a9-393b-425f-8093-7cd53c9cb15e"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:15:31 crc kubenswrapper[5104]: I0130 00:15:31.890808 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2fa15a9-393b-425f-8093-7cd53c9cb15e-client-ca" (OuterVolumeSpecName: "client-ca") pod "d2fa15a9-393b-425f-8093-7cd53c9cb15e" (UID: "d2fa15a9-393b-425f-8093-7cd53c9cb15e"). 
InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:15:31 crc kubenswrapper[5104]: I0130 00:15:31.890933 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2fa15a9-393b-425f-8093-7cd53c9cb15e-config" (OuterVolumeSpecName: "config") pod "d2fa15a9-393b-425f-8093-7cd53c9cb15e" (UID: "d2fa15a9-393b-425f-8093-7cd53c9cb15e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:15:31 crc kubenswrapper[5104]: I0130 00:15:31.893584 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2fa15a9-393b-425f-8093-7cd53c9cb15e-kube-api-access-d4jvf" (OuterVolumeSpecName: "kube-api-access-d4jvf") pod "d2fa15a9-393b-425f-8093-7cd53c9cb15e" (UID: "d2fa15a9-393b-425f-8093-7cd53c9cb15e"). InnerVolumeSpecName "kube-api-access-d4jvf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:15:31 crc kubenswrapper[5104]: I0130 00:15:31.893640 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2fa15a9-393b-425f-8093-7cd53c9cb15e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d2fa15a9-393b-425f-8093-7cd53c9cb15e" (UID: "d2fa15a9-393b-425f-8093-7cd53c9cb15e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:15:31 crc kubenswrapper[5104]: I0130 00:15:31.897961 5104 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7bc6fb6d58-xnrn9" Jan 30 00:15:32 crc kubenswrapper[5104]: I0130 00:15:32.018525 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbfc6a79-6bea-4686-8608-068065c6d30a-config\") pod \"controller-manager-5c4c8c4f8d-d6mtr\" (UID: \"dbfc6a79-6bea-4686-8608-068065c6d30a\") " pod="openshift-controller-manager/controller-manager-5c4c8c4f8d-d6mtr" Jan 30 00:15:32 crc kubenswrapper[5104]: I0130 00:15:32.018573 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvw5b\" (UniqueName: \"kubernetes.io/projected/dbfc6a79-6bea-4686-8608-068065c6d30a-kube-api-access-hvw5b\") pod \"controller-manager-5c4c8c4f8d-d6mtr\" (UID: \"dbfc6a79-6bea-4686-8608-068065c6d30a\") " pod="openshift-controller-manager/controller-manager-5c4c8c4f8d-d6mtr" Jan 30 00:15:32 crc kubenswrapper[5104]: I0130 00:15:32.018594 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dbfc6a79-6bea-4686-8608-068065c6d30a-proxy-ca-bundles\") pod \"controller-manager-5c4c8c4f8d-d6mtr\" (UID: \"dbfc6a79-6bea-4686-8608-068065c6d30a\") " pod="openshift-controller-manager/controller-manager-5c4c8c4f8d-d6mtr" Jan 30 00:15:32 crc kubenswrapper[5104]: I0130 00:15:32.018614 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dbfc6a79-6bea-4686-8608-068065c6d30a-serving-cert\") pod \"controller-manager-5c4c8c4f8d-d6mtr\" (UID: \"dbfc6a79-6bea-4686-8608-068065c6d30a\") " pod="openshift-controller-manager/controller-manager-5c4c8c4f8d-d6mtr" Jan 30 00:15:32 crc kubenswrapper[5104]: I0130 00:15:32.018861 5104 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dbfc6a79-6bea-4686-8608-068065c6d30a-client-ca\") pod \"controller-manager-5c4c8c4f8d-d6mtr\" (UID: \"dbfc6a79-6bea-4686-8608-068065c6d30a\") " pod="openshift-controller-manager/controller-manager-5c4c8c4f8d-d6mtr" Jan 30 00:15:32 crc kubenswrapper[5104]: I0130 00:15:32.018910 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/dbfc6a79-6bea-4686-8608-068065c6d30a-tmp\") pod \"controller-manager-5c4c8c4f8d-d6mtr\" (UID: \"dbfc6a79-6bea-4686-8608-068065c6d30a\") " pod="openshift-controller-manager/controller-manager-5c4c8c4f8d-d6mtr" Jan 30 00:15:32 crc kubenswrapper[5104]: I0130 00:15:32.018982 5104 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d2fa15a9-393b-425f-8093-7cd53c9cb15e-tmp\") on node \"crc\" DevicePath \"\"" Jan 30 00:15:32 crc kubenswrapper[5104]: I0130 00:15:32.018996 5104 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d4jvf\" (UniqueName: \"kubernetes.io/projected/d2fa15a9-393b-425f-8093-7cd53c9cb15e-kube-api-access-d4jvf\") on node \"crc\" DevicePath \"\"" Jan 30 00:15:32 crc kubenswrapper[5104]: I0130 00:15:32.019005 5104 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2fa15a9-393b-425f-8093-7cd53c9cb15e-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:15:32 crc kubenswrapper[5104]: I0130 00:15:32.019013 5104 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d2fa15a9-393b-425f-8093-7cd53c9cb15e-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 00:15:32 crc kubenswrapper[5104]: I0130 00:15:32.019021 5104 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/d2fa15a9-393b-425f-8093-7cd53c9cb15e-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 30 00:15:32 crc kubenswrapper[5104]: I0130 00:15:32.019029 5104 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2fa15a9-393b-425f-8093-7cd53c9cb15e-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:15:32 crc kubenswrapper[5104]: I0130 00:15:32.120608 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dbfc6a79-6bea-4686-8608-068065c6d30a-serving-cert\") pod \"controller-manager-5c4c8c4f8d-d6mtr\" (UID: \"dbfc6a79-6bea-4686-8608-068065c6d30a\") " pod="openshift-controller-manager/controller-manager-5c4c8c4f8d-d6mtr" Jan 30 00:15:32 crc kubenswrapper[5104]: I0130 00:15:32.121050 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dbfc6a79-6bea-4686-8608-068065c6d30a-client-ca\") pod \"controller-manager-5c4c8c4f8d-d6mtr\" (UID: \"dbfc6a79-6bea-4686-8608-068065c6d30a\") " pod="openshift-controller-manager/controller-manager-5c4c8c4f8d-d6mtr" Jan 30 00:15:32 crc kubenswrapper[5104]: I0130 00:15:32.121085 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/dbfc6a79-6bea-4686-8608-068065c6d30a-tmp\") pod \"controller-manager-5c4c8c4f8d-d6mtr\" (UID: \"dbfc6a79-6bea-4686-8608-068065c6d30a\") " pod="openshift-controller-manager/controller-manager-5c4c8c4f8d-d6mtr" Jan 30 00:15:32 crc kubenswrapper[5104]: I0130 00:15:32.121114 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbfc6a79-6bea-4686-8608-068065c6d30a-config\") pod \"controller-manager-5c4c8c4f8d-d6mtr\" (UID: \"dbfc6a79-6bea-4686-8608-068065c6d30a\") " 
pod="openshift-controller-manager/controller-manager-5c4c8c4f8d-d6mtr" Jan 30 00:15:32 crc kubenswrapper[5104]: I0130 00:15:32.121143 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hvw5b\" (UniqueName: \"kubernetes.io/projected/dbfc6a79-6bea-4686-8608-068065c6d30a-kube-api-access-hvw5b\") pod \"controller-manager-5c4c8c4f8d-d6mtr\" (UID: \"dbfc6a79-6bea-4686-8608-068065c6d30a\") " pod="openshift-controller-manager/controller-manager-5c4c8c4f8d-d6mtr" Jan 30 00:15:32 crc kubenswrapper[5104]: I0130 00:15:32.121171 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dbfc6a79-6bea-4686-8608-068065c6d30a-proxy-ca-bundles\") pod \"controller-manager-5c4c8c4f8d-d6mtr\" (UID: \"dbfc6a79-6bea-4686-8608-068065c6d30a\") " pod="openshift-controller-manager/controller-manager-5c4c8c4f8d-d6mtr" Jan 30 00:15:32 crc kubenswrapper[5104]: I0130 00:15:32.121666 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/dbfc6a79-6bea-4686-8608-068065c6d30a-tmp\") pod \"controller-manager-5c4c8c4f8d-d6mtr\" (UID: \"dbfc6a79-6bea-4686-8608-068065c6d30a\") " pod="openshift-controller-manager/controller-manager-5c4c8c4f8d-d6mtr" Jan 30 00:15:32 crc kubenswrapper[5104]: I0130 00:15:32.122183 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dbfc6a79-6bea-4686-8608-068065c6d30a-client-ca\") pod \"controller-manager-5c4c8c4f8d-d6mtr\" (UID: \"dbfc6a79-6bea-4686-8608-068065c6d30a\") " pod="openshift-controller-manager/controller-manager-5c4c8c4f8d-d6mtr" Jan 30 00:15:32 crc kubenswrapper[5104]: I0130 00:15:32.122681 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dbfc6a79-6bea-4686-8608-068065c6d30a-proxy-ca-bundles\") pod 
\"controller-manager-5c4c8c4f8d-d6mtr\" (UID: \"dbfc6a79-6bea-4686-8608-068065c6d30a\") " pod="openshift-controller-manager/controller-manager-5c4c8c4f8d-d6mtr" Jan 30 00:15:32 crc kubenswrapper[5104]: I0130 00:15:32.123436 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7bc6fb6d58-xnrn9"] Jan 30 00:15:32 crc kubenswrapper[5104]: I0130 00:15:32.123494 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbfc6a79-6bea-4686-8608-068065c6d30a-config\") pod \"controller-manager-5c4c8c4f8d-d6mtr\" (UID: \"dbfc6a79-6bea-4686-8608-068065c6d30a\") " pod="openshift-controller-manager/controller-manager-5c4c8c4f8d-d6mtr" Jan 30 00:15:32 crc kubenswrapper[5104]: W0130 00:15:32.130258 5104 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5b35862a_cfe3_4de1_963a_211aa93379ce.slice/crio-a1cc8bf320fdfe26e30df47479e73c2b06b690968b1bdc1e2f6a671fbb6c90c1 WatchSource:0}: Error finding container a1cc8bf320fdfe26e30df47479e73c2b06b690968b1bdc1e2f6a671fbb6c90c1: Status 404 returned error can't find the container with id a1cc8bf320fdfe26e30df47479e73c2b06b690968b1bdc1e2f6a671fbb6c90c1 Jan 30 00:15:32 crc kubenswrapper[5104]: I0130 00:15:32.130656 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dbfc6a79-6bea-4686-8608-068065c6d30a-serving-cert\") pod \"controller-manager-5c4c8c4f8d-d6mtr\" (UID: \"dbfc6a79-6bea-4686-8608-068065c6d30a\") " pod="openshift-controller-manager/controller-manager-5c4c8c4f8d-d6mtr" Jan 30 00:15:32 crc kubenswrapper[5104]: I0130 00:15:32.133394 5104 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 00:15:32 crc kubenswrapper[5104]: I0130 00:15:32.139827 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-hvw5b\" (UniqueName: \"kubernetes.io/projected/dbfc6a79-6bea-4686-8608-068065c6d30a-kube-api-access-hvw5b\") pod \"controller-manager-5c4c8c4f8d-d6mtr\" (UID: \"dbfc6a79-6bea-4686-8608-068065c6d30a\") " pod="openshift-controller-manager/controller-manager-5c4c8c4f8d-d6mtr" Jan 30 00:15:32 crc kubenswrapper[5104]: I0130 00:15:32.420644 5104 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-647ff7f58-ftng5" Jan 30 00:15:32 crc kubenswrapper[5104]: I0130 00:15:32.420636 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-647ff7f58-ftng5" event={"ID":"96fa7b64-44c0-44b2-be6b-a0e31861888b","Type":"ContainerDied","Data":"cdf15c30be1d297c0bdcc094f174059066aabd87c2084e260b49ca451655af8d"} Jan 30 00:15:32 crc kubenswrapper[5104]: I0130 00:15:32.420952 5104 scope.go:117] "RemoveContainer" containerID="6b7b705a7d7589e8188bb0f5a9f9693ef9ba607a6d209d83645b4d9a34d78245" Jan 30 00:15:32 crc kubenswrapper[5104]: I0130 00:15:32.423996 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5cf8df6c94-qhgxh" event={"ID":"d2fa15a9-393b-425f-8093-7cd53c9cb15e","Type":"ContainerDied","Data":"d800750513e04cf830e6ad4a8588fac3be5be3c1459b2d7154ecd8dfd0606ba5"} Jan 30 00:15:32 crc kubenswrapper[5104]: I0130 00:15:32.424127 5104 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5cf8df6c94-qhgxh" Jan 30 00:15:32 crc kubenswrapper[5104]: I0130 00:15:32.427248 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7bc6fb6d58-xnrn9" event={"ID":"5b35862a-cfe3-4de1-963a-211aa93379ce","Type":"ContainerStarted","Data":"a2cc93d2ea6eeabe07b6d796193e9ad795d2613e843c72726bc42e7183036b80"} Jan 30 00:15:32 crc kubenswrapper[5104]: I0130 00:15:32.427462 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7bc6fb6d58-xnrn9" event={"ID":"5b35862a-cfe3-4de1-963a-211aa93379ce","Type":"ContainerStarted","Data":"a1cc8bf320fdfe26e30df47479e73c2b06b690968b1bdc1e2f6a671fbb6c90c1"} Jan 30 00:15:32 crc kubenswrapper[5104]: I0130 00:15:32.428702 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-7bc6fb6d58-xnrn9" Jan 30 00:15:32 crc kubenswrapper[5104]: I0130 00:15:32.437704 5104 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5c4c8c4f8d-d6mtr" Jan 30 00:15:32 crc kubenswrapper[5104]: I0130 00:15:32.452817 5104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7bc6fb6d58-xnrn9" podStartSLOduration=1.452758833 podStartE2EDuration="1.452758833s" podCreationTimestamp="2026-01-30 00:15:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:15:32.451866971 +0000 UTC m=+313.184206190" watchObservedRunningTime="2026-01-30 00:15:32.452758833 +0000 UTC m=+313.185098052" Jan 30 00:15:32 crc kubenswrapper[5104]: I0130 00:15:32.468185 5104 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-647ff7f58-ftng5"] Jan 30 00:15:32 crc kubenswrapper[5104]: I0130 00:15:32.476203 5104 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-647ff7f58-ftng5"] Jan 30 00:15:32 crc kubenswrapper[5104]: I0130 00:15:32.481007 5104 scope.go:117] "RemoveContainer" containerID="92c98f6be3d7db63f211809375e9453461fc8dc672a6bd8cb3705086ae0c69f9" Jan 30 00:15:32 crc kubenswrapper[5104]: I0130 00:15:32.482899 5104 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5cf8df6c94-qhgxh"] Jan 30 00:15:32 crc kubenswrapper[5104]: I0130 00:15:32.487518 5104 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5cf8df6c94-qhgxh"] Jan 30 00:15:32 crc kubenswrapper[5104]: I0130 00:15:32.533558 5104 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96fa7b64-44c0-44b2-be6b-a0e31861888b" path="/var/lib/kubelet/pods/96fa7b64-44c0-44b2-be6b-a0e31861888b/volumes" Jan 30 00:15:32 crc kubenswrapper[5104]: I0130 00:15:32.534204 5104 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2fa15a9-393b-425f-8093-7cd53c9cb15e" path="/var/lib/kubelet/pods/d2fa15a9-393b-425f-8093-7cd53c9cb15e/volumes" Jan 30 00:15:32 crc kubenswrapper[5104]: I0130 00:15:32.657649 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5c4c8c4f8d-d6mtr"] Jan 30 00:15:32 crc kubenswrapper[5104]: I0130 00:15:32.819775 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7bc6fb6d58-xnrn9" Jan 30 00:15:33 crc kubenswrapper[5104]: I0130 00:15:33.449086 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c4c8c4f8d-d6mtr" event={"ID":"dbfc6a79-6bea-4686-8608-068065c6d30a","Type":"ContainerStarted","Data":"2b2357f865001bdcd00af7ad189e5e8a6dc6fd88b7ae4508e1a088bc36bff38b"} Jan 30 00:15:33 crc kubenswrapper[5104]: I0130 00:15:33.449230 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c4c8c4f8d-d6mtr" event={"ID":"dbfc6a79-6bea-4686-8608-068065c6d30a","Type":"ContainerStarted","Data":"e422d7c6d532d8b8da021edde11ba9a4b2a661281e22be94a648aba489b1ef59"} Jan 30 00:15:33 crc kubenswrapper[5104]: I0130 00:15:33.449487 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-5c4c8c4f8d-d6mtr" Jan 30 00:15:33 crc kubenswrapper[5104]: I0130 00:15:33.484297 5104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5c4c8c4f8d-d6mtr" podStartSLOduration=2.484270504 podStartE2EDuration="2.484270504s" podCreationTimestamp="2026-01-30 00:15:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:15:33.472161863 +0000 UTC m=+314.204501112" 
watchObservedRunningTime="2026-01-30 00:15:33.484270504 +0000 UTC m=+314.216609753" Jan 30 00:15:33 crc kubenswrapper[5104]: I0130 00:15:33.545267 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5c4c8c4f8d-d6mtr" Jan 30 00:16:11 crc kubenswrapper[5104]: I0130 00:16:11.673628 5104 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5c4c8c4f8d-d6mtr"] Jan 30 00:16:11 crc kubenswrapper[5104]: I0130 00:16:11.674549 5104 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5c4c8c4f8d-d6mtr" podUID="dbfc6a79-6bea-4686-8608-068065c6d30a" containerName="controller-manager" containerID="cri-o://2b2357f865001bdcd00af7ad189e5e8a6dc6fd88b7ae4508e1a088bc36bff38b" gracePeriod=30 Jan 30 00:16:11 crc kubenswrapper[5104]: I0130 00:16:11.685552 5104 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7bc6fb6d58-xnrn9"] Jan 30 00:16:11 crc kubenswrapper[5104]: I0130 00:16:11.685799 5104 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7bc6fb6d58-xnrn9" podUID="5b35862a-cfe3-4de1-963a-211aa93379ce" containerName="route-controller-manager" containerID="cri-o://a2cc93d2ea6eeabe07b6d796193e9ad795d2613e843c72726bc42e7183036b80" gracePeriod=30 Jan 30 00:16:12 crc kubenswrapper[5104]: I0130 00:16:12.093147 5104 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5c4c8c4f8d-d6mtr" Jan 30 00:16:12 crc kubenswrapper[5104]: I0130 00:16:12.098146 5104 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7bc6fb6d58-xnrn9" Jan 30 00:16:12 crc kubenswrapper[5104]: I0130 00:16:12.129376 5104 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-d7b798567-fdf27"] Jan 30 00:16:12 crc kubenswrapper[5104]: I0130 00:16:12.130106 5104 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5b35862a-cfe3-4de1-963a-211aa93379ce" containerName="route-controller-manager" Jan 30 00:16:12 crc kubenswrapper[5104]: I0130 00:16:12.130144 5104 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b35862a-cfe3-4de1-963a-211aa93379ce" containerName="route-controller-manager" Jan 30 00:16:12 crc kubenswrapper[5104]: I0130 00:16:12.130202 5104 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="dbfc6a79-6bea-4686-8608-068065c6d30a" containerName="controller-manager" Jan 30 00:16:12 crc kubenswrapper[5104]: I0130 00:16:12.130211 5104 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbfc6a79-6bea-4686-8608-068065c6d30a" containerName="controller-manager" Jan 30 00:16:12 crc kubenswrapper[5104]: I0130 00:16:12.130337 5104 memory_manager.go:356] "RemoveStaleState removing state" podUID="5b35862a-cfe3-4de1-963a-211aa93379ce" containerName="route-controller-manager" Jan 30 00:16:12 crc kubenswrapper[5104]: I0130 00:16:12.130356 5104 memory_manager.go:356] "RemoveStaleState removing state" podUID="dbfc6a79-6bea-4686-8608-068065c6d30a" containerName="controller-manager" Jan 30 00:16:12 crc kubenswrapper[5104]: I0130 00:16:12.139330 5104 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-d7b798567-fdf27" Jan 30 00:16:12 crc kubenswrapper[5104]: I0130 00:16:12.158174 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-d7b798567-fdf27"] Jan 30 00:16:12 crc kubenswrapper[5104]: I0130 00:16:12.164088 5104 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-65b49bf6f5-kgbbl"] Jan 30 00:16:12 crc kubenswrapper[5104]: I0130 00:16:12.171175 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-65b49bf6f5-kgbbl"] Jan 30 00:16:12 crc kubenswrapper[5104]: I0130 00:16:12.171439 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-65b49bf6f5-kgbbl" Jan 30 00:16:12 crc kubenswrapper[5104]: I0130 00:16:12.230018 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dbfc6a79-6bea-4686-8608-068065c6d30a-proxy-ca-bundles\") pod \"dbfc6a79-6bea-4686-8608-068065c6d30a\" (UID: \"dbfc6a79-6bea-4686-8608-068065c6d30a\") " Jan 30 00:16:12 crc kubenswrapper[5104]: I0130 00:16:12.230073 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-khxwp\" (UniqueName: \"kubernetes.io/projected/5b35862a-cfe3-4de1-963a-211aa93379ce-kube-api-access-khxwp\") pod \"5b35862a-cfe3-4de1-963a-211aa93379ce\" (UID: \"5b35862a-cfe3-4de1-963a-211aa93379ce\") " Jan 30 00:16:12 crc kubenswrapper[5104]: I0130 00:16:12.230097 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b35862a-cfe3-4de1-963a-211aa93379ce-config\") pod \"5b35862a-cfe3-4de1-963a-211aa93379ce\" (UID: \"5b35862a-cfe3-4de1-963a-211aa93379ce\") " Jan 30 00:16:12 crc 
kubenswrapper[5104]: I0130 00:16:12.230131 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5b35862a-cfe3-4de1-963a-211aa93379ce-client-ca\") pod \"5b35862a-cfe3-4de1-963a-211aa93379ce\" (UID: \"5b35862a-cfe3-4de1-963a-211aa93379ce\") " Jan 30 00:16:12 crc kubenswrapper[5104]: I0130 00:16:12.230157 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hvw5b\" (UniqueName: \"kubernetes.io/projected/dbfc6a79-6bea-4686-8608-068065c6d30a-kube-api-access-hvw5b\") pod \"dbfc6a79-6bea-4686-8608-068065c6d30a\" (UID: \"dbfc6a79-6bea-4686-8608-068065c6d30a\") " Jan 30 00:16:12 crc kubenswrapper[5104]: I0130 00:16:12.230177 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbfc6a79-6bea-4686-8608-068065c6d30a-config\") pod \"dbfc6a79-6bea-4686-8608-068065c6d30a\" (UID: \"dbfc6a79-6bea-4686-8608-068065c6d30a\") " Jan 30 00:16:12 crc kubenswrapper[5104]: I0130 00:16:12.230200 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dbfc6a79-6bea-4686-8608-068065c6d30a-serving-cert\") pod \"dbfc6a79-6bea-4686-8608-068065c6d30a\" (UID: \"dbfc6a79-6bea-4686-8608-068065c6d30a\") " Jan 30 00:16:12 crc kubenswrapper[5104]: I0130 00:16:12.230239 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b35862a-cfe3-4de1-963a-211aa93379ce-serving-cert\") pod \"5b35862a-cfe3-4de1-963a-211aa93379ce\" (UID: \"5b35862a-cfe3-4de1-963a-211aa93379ce\") " Jan 30 00:16:12 crc kubenswrapper[5104]: I0130 00:16:12.230252 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dbfc6a79-6bea-4686-8608-068065c6d30a-client-ca\") pod 
\"dbfc6a79-6bea-4686-8608-068065c6d30a\" (UID: \"dbfc6a79-6bea-4686-8608-068065c6d30a\") " Jan 30 00:16:12 crc kubenswrapper[5104]: I0130 00:16:12.230274 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/dbfc6a79-6bea-4686-8608-068065c6d30a-tmp\") pod \"dbfc6a79-6bea-4686-8608-068065c6d30a\" (UID: \"dbfc6a79-6bea-4686-8608-068065c6d30a\") " Jan 30 00:16:12 crc kubenswrapper[5104]: I0130 00:16:12.230290 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5b35862a-cfe3-4de1-963a-211aa93379ce-tmp\") pod \"5b35862a-cfe3-4de1-963a-211aa93379ce\" (UID: \"5b35862a-cfe3-4de1-963a-211aa93379ce\") " Jan 30 00:16:12 crc kubenswrapper[5104]: I0130 00:16:12.230434 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8023952f-87bc-4ab0-b769-be1b5cc1a91e-proxy-ca-bundles\") pod \"controller-manager-d7b798567-fdf27\" (UID: \"8023952f-87bc-4ab0-b769-be1b5cc1a91e\") " pod="openshift-controller-manager/controller-manager-d7b798567-fdf27" Jan 30 00:16:12 crc kubenswrapper[5104]: I0130 00:16:12.230463 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8023952f-87bc-4ab0-b769-be1b5cc1a91e-tmp\") pod \"controller-manager-d7b798567-fdf27\" (UID: \"8023952f-87bc-4ab0-b769-be1b5cc1a91e\") " pod="openshift-controller-manager/controller-manager-d7b798567-fdf27" Jan 30 00:16:12 crc kubenswrapper[5104]: I0130 00:16:12.230503 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8023952f-87bc-4ab0-b769-be1b5cc1a91e-client-ca\") pod \"controller-manager-d7b798567-fdf27\" (UID: \"8023952f-87bc-4ab0-b769-be1b5cc1a91e\") " 
pod="openshift-controller-manager/controller-manager-d7b798567-fdf27" Jan 30 00:16:12 crc kubenswrapper[5104]: I0130 00:16:12.230521 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8023952f-87bc-4ab0-b769-be1b5cc1a91e-serving-cert\") pod \"controller-manager-d7b798567-fdf27\" (UID: \"8023952f-87bc-4ab0-b769-be1b5cc1a91e\") " pod="openshift-controller-manager/controller-manager-d7b798567-fdf27" Jan 30 00:16:12 crc kubenswrapper[5104]: I0130 00:16:12.230555 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8023952f-87bc-4ab0-b769-be1b5cc1a91e-config\") pod \"controller-manager-d7b798567-fdf27\" (UID: \"8023952f-87bc-4ab0-b769-be1b5cc1a91e\") " pod="openshift-controller-manager/controller-manager-d7b798567-fdf27" Jan 30 00:16:12 crc kubenswrapper[5104]: I0130 00:16:12.230585 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glr26\" (UniqueName: \"kubernetes.io/projected/8023952f-87bc-4ab0-b769-be1b5cc1a91e-kube-api-access-glr26\") pod \"controller-manager-d7b798567-fdf27\" (UID: \"8023952f-87bc-4ab0-b769-be1b5cc1a91e\") " pod="openshift-controller-manager/controller-manager-d7b798567-fdf27" Jan 30 00:16:12 crc kubenswrapper[5104]: I0130 00:16:12.231209 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dbfc6a79-6bea-4686-8608-068065c6d30a-tmp" (OuterVolumeSpecName: "tmp") pod "dbfc6a79-6bea-4686-8608-068065c6d30a" (UID: "dbfc6a79-6bea-4686-8608-068065c6d30a"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:16:12 crc kubenswrapper[5104]: I0130 00:16:12.231238 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b35862a-cfe3-4de1-963a-211aa93379ce-tmp" (OuterVolumeSpecName: "tmp") pod "5b35862a-cfe3-4de1-963a-211aa93379ce" (UID: "5b35862a-cfe3-4de1-963a-211aa93379ce"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:16:12 crc kubenswrapper[5104]: I0130 00:16:12.231348 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbfc6a79-6bea-4686-8608-068065c6d30a-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "dbfc6a79-6bea-4686-8608-068065c6d30a" (UID: "dbfc6a79-6bea-4686-8608-068065c6d30a"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:16:12 crc kubenswrapper[5104]: I0130 00:16:12.231689 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b35862a-cfe3-4de1-963a-211aa93379ce-client-ca" (OuterVolumeSpecName: "client-ca") pod "5b35862a-cfe3-4de1-963a-211aa93379ce" (UID: "5b35862a-cfe3-4de1-963a-211aa93379ce"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:16:12 crc kubenswrapper[5104]: I0130 00:16:12.231798 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b35862a-cfe3-4de1-963a-211aa93379ce-config" (OuterVolumeSpecName: "config") pod "5b35862a-cfe3-4de1-963a-211aa93379ce" (UID: "5b35862a-cfe3-4de1-963a-211aa93379ce"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:16:12 crc kubenswrapper[5104]: I0130 00:16:12.232025 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbfc6a79-6bea-4686-8608-068065c6d30a-config" (OuterVolumeSpecName: "config") pod "dbfc6a79-6bea-4686-8608-068065c6d30a" (UID: "dbfc6a79-6bea-4686-8608-068065c6d30a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:16:12 crc kubenswrapper[5104]: I0130 00:16:12.232289 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbfc6a79-6bea-4686-8608-068065c6d30a-client-ca" (OuterVolumeSpecName: "client-ca") pod "dbfc6a79-6bea-4686-8608-068065c6d30a" (UID: "dbfc6a79-6bea-4686-8608-068065c6d30a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:16:12 crc kubenswrapper[5104]: I0130 00:16:12.236012 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbfc6a79-6bea-4686-8608-068065c6d30a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "dbfc6a79-6bea-4686-8608-068065c6d30a" (UID: "dbfc6a79-6bea-4686-8608-068065c6d30a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:16:12 crc kubenswrapper[5104]: I0130 00:16:12.238971 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b35862a-cfe3-4de1-963a-211aa93379ce-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5b35862a-cfe3-4de1-963a-211aa93379ce" (UID: "5b35862a-cfe3-4de1-963a-211aa93379ce"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:16:12 crc kubenswrapper[5104]: I0130 00:16:12.239632 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b35862a-cfe3-4de1-963a-211aa93379ce-kube-api-access-khxwp" (OuterVolumeSpecName: "kube-api-access-khxwp") pod "5b35862a-cfe3-4de1-963a-211aa93379ce" (UID: "5b35862a-cfe3-4de1-963a-211aa93379ce"). InnerVolumeSpecName "kube-api-access-khxwp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:16:12 crc kubenswrapper[5104]: I0130 00:16:12.239926 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbfc6a79-6bea-4686-8608-068065c6d30a-kube-api-access-hvw5b" (OuterVolumeSpecName: "kube-api-access-hvw5b") pod "dbfc6a79-6bea-4686-8608-068065c6d30a" (UID: "dbfc6a79-6bea-4686-8608-068065c6d30a"). InnerVolumeSpecName "kube-api-access-hvw5b". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:16:12 crc kubenswrapper[5104]: I0130 00:16:12.331703 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f53115c-0a1c-475a-9440-55030dfb2438-serving-cert\") pod \"route-controller-manager-65b49bf6f5-kgbbl\" (UID: \"7f53115c-0a1c-475a-9440-55030dfb2438\") " pod="openshift-route-controller-manager/route-controller-manager-65b49bf6f5-kgbbl" Jan 30 00:16:12 crc kubenswrapper[5104]: I0130 00:16:12.331747 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8023952f-87bc-4ab0-b769-be1b5cc1a91e-client-ca\") pod \"controller-manager-d7b798567-fdf27\" (UID: \"8023952f-87bc-4ab0-b769-be1b5cc1a91e\") " pod="openshift-controller-manager/controller-manager-d7b798567-fdf27" Jan 30 00:16:12 crc kubenswrapper[5104]: I0130 00:16:12.331765 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/8023952f-87bc-4ab0-b769-be1b5cc1a91e-serving-cert\") pod \"controller-manager-d7b798567-fdf27\" (UID: \"8023952f-87bc-4ab0-b769-be1b5cc1a91e\") " pod="openshift-controller-manager/controller-manager-d7b798567-fdf27" Jan 30 00:16:12 crc kubenswrapper[5104]: I0130 00:16:12.331794 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7f53115c-0a1c-475a-9440-55030dfb2438-client-ca\") pod \"route-controller-manager-65b49bf6f5-kgbbl\" (UID: \"7f53115c-0a1c-475a-9440-55030dfb2438\") " pod="openshift-route-controller-manager/route-controller-manager-65b49bf6f5-kgbbl" Jan 30 00:16:12 crc kubenswrapper[5104]: I0130 00:16:12.331816 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8023952f-87bc-4ab0-b769-be1b5cc1a91e-config\") pod \"controller-manager-d7b798567-fdf27\" (UID: \"8023952f-87bc-4ab0-b769-be1b5cc1a91e\") " pod="openshift-controller-manager/controller-manager-d7b798567-fdf27" Jan 30 00:16:12 crc kubenswrapper[5104]: I0130 00:16:12.331837 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f53115c-0a1c-475a-9440-55030dfb2438-config\") pod \"route-controller-manager-65b49bf6f5-kgbbl\" (UID: \"7f53115c-0a1c-475a-9440-55030dfb2438\") " pod="openshift-route-controller-manager/route-controller-manager-65b49bf6f5-kgbbl" Jan 30 00:16:12 crc kubenswrapper[5104]: I0130 00:16:12.331873 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rg97f\" (UniqueName: \"kubernetes.io/projected/7f53115c-0a1c-475a-9440-55030dfb2438-kube-api-access-rg97f\") pod \"route-controller-manager-65b49bf6f5-kgbbl\" (UID: \"7f53115c-0a1c-475a-9440-55030dfb2438\") " 
pod="openshift-route-controller-manager/route-controller-manager-65b49bf6f5-kgbbl" Jan 30 00:16:12 crc kubenswrapper[5104]: I0130 00:16:12.332312 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-glr26\" (UniqueName: \"kubernetes.io/projected/8023952f-87bc-4ab0-b769-be1b5cc1a91e-kube-api-access-glr26\") pod \"controller-manager-d7b798567-fdf27\" (UID: \"8023952f-87bc-4ab0-b769-be1b5cc1a91e\") " pod="openshift-controller-manager/controller-manager-d7b798567-fdf27" Jan 30 00:16:12 crc kubenswrapper[5104]: I0130 00:16:12.332455 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8023952f-87bc-4ab0-b769-be1b5cc1a91e-proxy-ca-bundles\") pod \"controller-manager-d7b798567-fdf27\" (UID: \"8023952f-87bc-4ab0-b769-be1b5cc1a91e\") " pod="openshift-controller-manager/controller-manager-d7b798567-fdf27" Jan 30 00:16:12 crc kubenswrapper[5104]: I0130 00:16:12.332673 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7f53115c-0a1c-475a-9440-55030dfb2438-tmp\") pod \"route-controller-manager-65b49bf6f5-kgbbl\" (UID: \"7f53115c-0a1c-475a-9440-55030dfb2438\") " pod="openshift-route-controller-manager/route-controller-manager-65b49bf6f5-kgbbl" Jan 30 00:16:12 crc kubenswrapper[5104]: I0130 00:16:12.332827 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8023952f-87bc-4ab0-b769-be1b5cc1a91e-tmp\") pod \"controller-manager-d7b798567-fdf27\" (UID: \"8023952f-87bc-4ab0-b769-be1b5cc1a91e\") " pod="openshift-controller-manager/controller-manager-d7b798567-fdf27" Jan 30 00:16:12 crc kubenswrapper[5104]: I0130 00:16:12.332997 5104 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hvw5b\" (UniqueName: 
\"kubernetes.io/projected/dbfc6a79-6bea-4686-8608-068065c6d30a-kube-api-access-hvw5b\") on node \"crc\" DevicePath \"\"" Jan 30 00:16:12 crc kubenswrapper[5104]: I0130 00:16:12.333072 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8023952f-87bc-4ab0-b769-be1b5cc1a91e-config\") pod \"controller-manager-d7b798567-fdf27\" (UID: \"8023952f-87bc-4ab0-b769-be1b5cc1a91e\") " pod="openshift-controller-manager/controller-manager-d7b798567-fdf27" Jan 30 00:16:12 crc kubenswrapper[5104]: I0130 00:16:12.333076 5104 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbfc6a79-6bea-4686-8608-068065c6d30a-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:16:12 crc kubenswrapper[5104]: I0130 00:16:12.333113 5104 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dbfc6a79-6bea-4686-8608-068065c6d30a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:16:12 crc kubenswrapper[5104]: I0130 00:16:12.333123 5104 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b35862a-cfe3-4de1-963a-211aa93379ce-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:16:12 crc kubenswrapper[5104]: I0130 00:16:12.333131 5104 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dbfc6a79-6bea-4686-8608-068065c6d30a-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 00:16:12 crc kubenswrapper[5104]: I0130 00:16:12.333139 5104 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/dbfc6a79-6bea-4686-8608-068065c6d30a-tmp\") on node \"crc\" DevicePath \"\"" Jan 30 00:16:12 crc kubenswrapper[5104]: I0130 00:16:12.333146 5104 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: 
\"kubernetes.io/empty-dir/5b35862a-cfe3-4de1-963a-211aa93379ce-tmp\") on node \"crc\" DevicePath \"\"" Jan 30 00:16:12 crc kubenswrapper[5104]: I0130 00:16:12.333155 5104 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dbfc6a79-6bea-4686-8608-068065c6d30a-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 30 00:16:12 crc kubenswrapper[5104]: I0130 00:16:12.333164 5104 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-khxwp\" (UniqueName: \"kubernetes.io/projected/5b35862a-cfe3-4de1-963a-211aa93379ce-kube-api-access-khxwp\") on node \"crc\" DevicePath \"\"" Jan 30 00:16:12 crc kubenswrapper[5104]: I0130 00:16:12.333172 5104 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b35862a-cfe3-4de1-963a-211aa93379ce-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:16:12 crc kubenswrapper[5104]: I0130 00:16:12.333179 5104 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5b35862a-cfe3-4de1-963a-211aa93379ce-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 00:16:12 crc kubenswrapper[5104]: I0130 00:16:12.333388 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8023952f-87bc-4ab0-b769-be1b5cc1a91e-tmp\") pod \"controller-manager-d7b798567-fdf27\" (UID: \"8023952f-87bc-4ab0-b769-be1b5cc1a91e\") " pod="openshift-controller-manager/controller-manager-d7b798567-fdf27" Jan 30 00:16:12 crc kubenswrapper[5104]: I0130 00:16:12.333494 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8023952f-87bc-4ab0-b769-be1b5cc1a91e-proxy-ca-bundles\") pod \"controller-manager-d7b798567-fdf27\" (UID: \"8023952f-87bc-4ab0-b769-be1b5cc1a91e\") " pod="openshift-controller-manager/controller-manager-d7b798567-fdf27" Jan 30 00:16:12 crc 
kubenswrapper[5104]: I0130 00:16:12.333587 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8023952f-87bc-4ab0-b769-be1b5cc1a91e-client-ca\") pod \"controller-manager-d7b798567-fdf27\" (UID: \"8023952f-87bc-4ab0-b769-be1b5cc1a91e\") " pod="openshift-controller-manager/controller-manager-d7b798567-fdf27" Jan 30 00:16:12 crc kubenswrapper[5104]: I0130 00:16:12.335870 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8023952f-87bc-4ab0-b769-be1b5cc1a91e-serving-cert\") pod \"controller-manager-d7b798567-fdf27\" (UID: \"8023952f-87bc-4ab0-b769-be1b5cc1a91e\") " pod="openshift-controller-manager/controller-manager-d7b798567-fdf27" Jan 30 00:16:12 crc kubenswrapper[5104]: I0130 00:16:12.348419 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-glr26\" (UniqueName: \"kubernetes.io/projected/8023952f-87bc-4ab0-b769-be1b5cc1a91e-kube-api-access-glr26\") pod \"controller-manager-d7b798567-fdf27\" (UID: \"8023952f-87bc-4ab0-b769-be1b5cc1a91e\") " pod="openshift-controller-manager/controller-manager-d7b798567-fdf27" Jan 30 00:16:12 crc kubenswrapper[5104]: I0130 00:16:12.434161 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f53115c-0a1c-475a-9440-55030dfb2438-serving-cert\") pod \"route-controller-manager-65b49bf6f5-kgbbl\" (UID: \"7f53115c-0a1c-475a-9440-55030dfb2438\") " pod="openshift-route-controller-manager/route-controller-manager-65b49bf6f5-kgbbl" Jan 30 00:16:12 crc kubenswrapper[5104]: I0130 00:16:12.434368 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7f53115c-0a1c-475a-9440-55030dfb2438-client-ca\") pod \"route-controller-manager-65b49bf6f5-kgbbl\" (UID: \"7f53115c-0a1c-475a-9440-55030dfb2438\") " 
pod="openshift-route-controller-manager/route-controller-manager-65b49bf6f5-kgbbl" Jan 30 00:16:12 crc kubenswrapper[5104]: I0130 00:16:12.434392 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f53115c-0a1c-475a-9440-55030dfb2438-config\") pod \"route-controller-manager-65b49bf6f5-kgbbl\" (UID: \"7f53115c-0a1c-475a-9440-55030dfb2438\") " pod="openshift-route-controller-manager/route-controller-manager-65b49bf6f5-kgbbl" Jan 30 00:16:12 crc kubenswrapper[5104]: I0130 00:16:12.434407 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rg97f\" (UniqueName: \"kubernetes.io/projected/7f53115c-0a1c-475a-9440-55030dfb2438-kube-api-access-rg97f\") pod \"route-controller-manager-65b49bf6f5-kgbbl\" (UID: \"7f53115c-0a1c-475a-9440-55030dfb2438\") " pod="openshift-route-controller-manager/route-controller-manager-65b49bf6f5-kgbbl" Jan 30 00:16:12 crc kubenswrapper[5104]: I0130 00:16:12.434464 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7f53115c-0a1c-475a-9440-55030dfb2438-tmp\") pod \"route-controller-manager-65b49bf6f5-kgbbl\" (UID: \"7f53115c-0a1c-475a-9440-55030dfb2438\") " pod="openshift-route-controller-manager/route-controller-manager-65b49bf6f5-kgbbl" Jan 30 00:16:12 crc kubenswrapper[5104]: I0130 00:16:12.434907 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7f53115c-0a1c-475a-9440-55030dfb2438-tmp\") pod \"route-controller-manager-65b49bf6f5-kgbbl\" (UID: \"7f53115c-0a1c-475a-9440-55030dfb2438\") " pod="openshift-route-controller-manager/route-controller-manager-65b49bf6f5-kgbbl" Jan 30 00:16:12 crc kubenswrapper[5104]: I0130 00:16:12.436548 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/7f53115c-0a1c-475a-9440-55030dfb2438-config\") pod \"route-controller-manager-65b49bf6f5-kgbbl\" (UID: \"7f53115c-0a1c-475a-9440-55030dfb2438\") " pod="openshift-route-controller-manager/route-controller-manager-65b49bf6f5-kgbbl" Jan 30 00:16:12 crc kubenswrapper[5104]: I0130 00:16:12.436732 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7f53115c-0a1c-475a-9440-55030dfb2438-client-ca\") pod \"route-controller-manager-65b49bf6f5-kgbbl\" (UID: \"7f53115c-0a1c-475a-9440-55030dfb2438\") " pod="openshift-route-controller-manager/route-controller-manager-65b49bf6f5-kgbbl" Jan 30 00:16:12 crc kubenswrapper[5104]: I0130 00:16:12.438979 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f53115c-0a1c-475a-9440-55030dfb2438-serving-cert\") pod \"route-controller-manager-65b49bf6f5-kgbbl\" (UID: \"7f53115c-0a1c-475a-9440-55030dfb2438\") " pod="openshift-route-controller-manager/route-controller-manager-65b49bf6f5-kgbbl" Jan 30 00:16:12 crc kubenswrapper[5104]: I0130 00:16:12.460981 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-d7b798567-fdf27" Jan 30 00:16:12 crc kubenswrapper[5104]: I0130 00:16:12.461529 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rg97f\" (UniqueName: \"kubernetes.io/projected/7f53115c-0a1c-475a-9440-55030dfb2438-kube-api-access-rg97f\") pod \"route-controller-manager-65b49bf6f5-kgbbl\" (UID: \"7f53115c-0a1c-475a-9440-55030dfb2438\") " pod="openshift-route-controller-manager/route-controller-manager-65b49bf6f5-kgbbl" Jan 30 00:16:12 crc kubenswrapper[5104]: I0130 00:16:12.491751 5104 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-65b49bf6f5-kgbbl" Jan 30 00:16:12 crc kubenswrapper[5104]: I0130 00:16:12.716465 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-65b49bf6f5-kgbbl"] Jan 30 00:16:12 crc kubenswrapper[5104]: I0130 00:16:12.733882 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-65b49bf6f5-kgbbl" event={"ID":"7f53115c-0a1c-475a-9440-55030dfb2438","Type":"ContainerStarted","Data":"d3dc1dd6cbc3cacc2cb9cade2122a45265bd58ba98843616e6abf806742fa7e1"} Jan 30 00:16:12 crc kubenswrapper[5104]: I0130 00:16:12.735175 5104 generic.go:358] "Generic (PLEG): container finished" podID="dbfc6a79-6bea-4686-8608-068065c6d30a" containerID="2b2357f865001bdcd00af7ad189e5e8a6dc6fd88b7ae4508e1a088bc36bff38b" exitCode=0 Jan 30 00:16:12 crc kubenswrapper[5104]: I0130 00:16:12.735266 5104 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5c4c8c4f8d-d6mtr" Jan 30 00:16:12 crc kubenswrapper[5104]: I0130 00:16:12.735356 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c4c8c4f8d-d6mtr" event={"ID":"dbfc6a79-6bea-4686-8608-068065c6d30a","Type":"ContainerDied","Data":"2b2357f865001bdcd00af7ad189e5e8a6dc6fd88b7ae4508e1a088bc36bff38b"} Jan 30 00:16:12 crc kubenswrapper[5104]: I0130 00:16:12.735434 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c4c8c4f8d-d6mtr" event={"ID":"dbfc6a79-6bea-4686-8608-068065c6d30a","Type":"ContainerDied","Data":"e422d7c6d532d8b8da021edde11ba9a4b2a661281e22be94a648aba489b1ef59"} Jan 30 00:16:12 crc kubenswrapper[5104]: I0130 00:16:12.735501 5104 scope.go:117] "RemoveContainer" containerID="2b2357f865001bdcd00af7ad189e5e8a6dc6fd88b7ae4508e1a088bc36bff38b" Jan 30 00:16:12 crc kubenswrapper[5104]: I0130 00:16:12.739934 5104 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7bc6fb6d58-xnrn9" Jan 30 00:16:12 crc kubenswrapper[5104]: I0130 00:16:12.740010 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7bc6fb6d58-xnrn9" event={"ID":"5b35862a-cfe3-4de1-963a-211aa93379ce","Type":"ContainerDied","Data":"a2cc93d2ea6eeabe07b6d796193e9ad795d2613e843c72726bc42e7183036b80"} Jan 30 00:16:12 crc kubenswrapper[5104]: I0130 00:16:12.739928 5104 generic.go:358] "Generic (PLEG): container finished" podID="5b35862a-cfe3-4de1-963a-211aa93379ce" containerID="a2cc93d2ea6eeabe07b6d796193e9ad795d2613e843c72726bc42e7183036b80" exitCode=0 Jan 30 00:16:12 crc kubenswrapper[5104]: I0130 00:16:12.740204 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7bc6fb6d58-xnrn9" event={"ID":"5b35862a-cfe3-4de1-963a-211aa93379ce","Type":"ContainerDied","Data":"a1cc8bf320fdfe26e30df47479e73c2b06b690968b1bdc1e2f6a671fbb6c90c1"} Jan 30 00:16:12 crc kubenswrapper[5104]: I0130 00:16:12.759413 5104 scope.go:117] "RemoveContainer" containerID="2b2357f865001bdcd00af7ad189e5e8a6dc6fd88b7ae4508e1a088bc36bff38b" Jan 30 00:16:12 crc kubenswrapper[5104]: I0130 00:16:12.759541 5104 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5c4c8c4f8d-d6mtr"] Jan 30 00:16:12 crc kubenswrapper[5104]: E0130 00:16:12.759766 5104 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b2357f865001bdcd00af7ad189e5e8a6dc6fd88b7ae4508e1a088bc36bff38b\": container with ID starting with 2b2357f865001bdcd00af7ad189e5e8a6dc6fd88b7ae4508e1a088bc36bff38b not found: ID does not exist" containerID="2b2357f865001bdcd00af7ad189e5e8a6dc6fd88b7ae4508e1a088bc36bff38b" Jan 30 00:16:12 crc kubenswrapper[5104]: I0130 00:16:12.759801 5104 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b2357f865001bdcd00af7ad189e5e8a6dc6fd88b7ae4508e1a088bc36bff38b"} err="failed to get container status \"2b2357f865001bdcd00af7ad189e5e8a6dc6fd88b7ae4508e1a088bc36bff38b\": rpc error: code = NotFound desc = could not find container \"2b2357f865001bdcd00af7ad189e5e8a6dc6fd88b7ae4508e1a088bc36bff38b\": container with ID starting with 2b2357f865001bdcd00af7ad189e5e8a6dc6fd88b7ae4508e1a088bc36bff38b not found: ID does not exist" Jan 30 00:16:12 crc kubenswrapper[5104]: I0130 00:16:12.759822 5104 scope.go:117] "RemoveContainer" containerID="a2cc93d2ea6eeabe07b6d796193e9ad795d2613e843c72726bc42e7183036b80" Jan 30 00:16:12 crc kubenswrapper[5104]: I0130 00:16:12.764924 5104 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5c4c8c4f8d-d6mtr"] Jan 30 00:16:12 crc kubenswrapper[5104]: I0130 00:16:12.768652 5104 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7bc6fb6d58-xnrn9"] Jan 30 00:16:12 crc kubenswrapper[5104]: I0130 00:16:12.774918 5104 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7bc6fb6d58-xnrn9"] Jan 30 00:16:12 crc kubenswrapper[5104]: I0130 00:16:12.779044 5104 scope.go:117] "RemoveContainer" containerID="a2cc93d2ea6eeabe07b6d796193e9ad795d2613e843c72726bc42e7183036b80" Jan 30 00:16:12 crc kubenswrapper[5104]: E0130 00:16:12.779439 5104 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2cc93d2ea6eeabe07b6d796193e9ad795d2613e843c72726bc42e7183036b80\": container with ID starting with a2cc93d2ea6eeabe07b6d796193e9ad795d2613e843c72726bc42e7183036b80 not found: ID does not exist" containerID="a2cc93d2ea6eeabe07b6d796193e9ad795d2613e843c72726bc42e7183036b80" Jan 30 00:16:12 crc kubenswrapper[5104]: I0130 00:16:12.779479 5104 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2cc93d2ea6eeabe07b6d796193e9ad795d2613e843c72726bc42e7183036b80"} err="failed to get container status \"a2cc93d2ea6eeabe07b6d796193e9ad795d2613e843c72726bc42e7183036b80\": rpc error: code = NotFound desc = could not find container \"a2cc93d2ea6eeabe07b6d796193e9ad795d2613e843c72726bc42e7183036b80\": container with ID starting with a2cc93d2ea6eeabe07b6d796193e9ad795d2613e843c72726bc42e7183036b80 not found: ID does not exist" Jan 30 00:16:12 crc kubenswrapper[5104]: I0130 00:16:12.876623 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-d7b798567-fdf27"] Jan 30 00:16:12 crc kubenswrapper[5104]: W0130 00:16:12.890415 5104 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8023952f_87bc_4ab0_b769_be1b5cc1a91e.slice/crio-3bd0412c584c03e6ac02e09a77e0f2eca1568e2bbad0b2a87045a4048bff05a5 WatchSource:0}: Error finding container 3bd0412c584c03e6ac02e09a77e0f2eca1568e2bbad0b2a87045a4048bff05a5: Status 404 returned error can't find the container with id 3bd0412c584c03e6ac02e09a77e0f2eca1568e2bbad0b2a87045a4048bff05a5 Jan 30 00:16:13 crc kubenswrapper[5104]: I0130 00:16:13.745630 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-65b49bf6f5-kgbbl" event={"ID":"7f53115c-0a1c-475a-9440-55030dfb2438","Type":"ContainerStarted","Data":"3b48e3007df97e2337ed65bc64aa8053711c2ae561a95e78367a68806f0e382d"} Jan 30 00:16:13 crc kubenswrapper[5104]: I0130 00:16:13.746357 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-65b49bf6f5-kgbbl" Jan 30 00:16:13 crc kubenswrapper[5104]: I0130 00:16:13.747738 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-d7b798567-fdf27" event={"ID":"8023952f-87bc-4ab0-b769-be1b5cc1a91e","Type":"ContainerStarted","Data":"341a939e4bcf7771d2d680141bba6327c79d3cb8a728da17f429d042a08c9130"} Jan 30 00:16:13 crc kubenswrapper[5104]: I0130 00:16:13.747893 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d7b798567-fdf27" event={"ID":"8023952f-87bc-4ab0-b769-be1b5cc1a91e","Type":"ContainerStarted","Data":"3bd0412c584c03e6ac02e09a77e0f2eca1568e2bbad0b2a87045a4048bff05a5"} Jan 30 00:16:13 crc kubenswrapper[5104]: I0130 00:16:13.747977 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-d7b798567-fdf27" Jan 30 00:16:13 crc kubenswrapper[5104]: I0130 00:16:13.753500 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-65b49bf6f5-kgbbl" Jan 30 00:16:13 crc kubenswrapper[5104]: I0130 00:16:13.784170 5104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-65b49bf6f5-kgbbl" podStartSLOduration=2.784154527 podStartE2EDuration="2.784154527s" podCreationTimestamp="2026-01-30 00:16:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:16:13.768816941 +0000 UTC m=+354.501156160" watchObservedRunningTime="2026-01-30 00:16:13.784154527 +0000 UTC m=+354.516493746" Jan 30 00:16:13 crc kubenswrapper[5104]: I0130 00:16:13.803666 5104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-d7b798567-fdf27" podStartSLOduration=2.803647335 podStartE2EDuration="2.803647335s" podCreationTimestamp="2026-01-30 00:16:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:16:13.801719452 +0000 UTC m=+354.534058681" watchObservedRunningTime="2026-01-30 00:16:13.803647335 +0000 UTC m=+354.535986554" Jan 30 00:16:14 crc kubenswrapper[5104]: I0130 00:16:14.002686 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-d7b798567-fdf27" Jan 30 00:16:14 crc kubenswrapper[5104]: I0130 00:16:14.533008 5104 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b35862a-cfe3-4de1-963a-211aa93379ce" path="/var/lib/kubelet/pods/5b35862a-cfe3-4de1-963a-211aa93379ce/volumes" Jan 30 00:16:14 crc kubenswrapper[5104]: I0130 00:16:14.533767 5104 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbfc6a79-6bea-4686-8608-068065c6d30a" path="/var/lib/kubelet/pods/dbfc6a79-6bea-4686-8608-068065c6d30a/volumes" Jan 30 00:16:17 crc kubenswrapper[5104]: I0130 00:16:17.733620 5104 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r6xks"] Jan 30 00:16:17 crc kubenswrapper[5104]: I0130 00:16:17.734424 5104 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-r6xks" podUID="103981ae-943d-41ab-a2d1-9cafe7669187" containerName="registry-server" containerID="cri-o://fbc4871e886fffe25e40556cd82bd1c53ece447e07f72d5836ea4f28c1b2a9c5" gracePeriod=30 Jan 30 00:16:17 crc kubenswrapper[5104]: I0130 00:16:17.755057 5104 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kzfbd"] Jan 30 00:16:17 crc kubenswrapper[5104]: I0130 00:16:17.755486 5104 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-kzfbd" podUID="74720252-7847-489b-a755-3c27d70770f9" containerName="registry-server" containerID="cri-o://2ad59b4028069a6e44ad6ecdcbb11f91ed66f86ccf034ac19041ca9da07a008a" 
gracePeriod=30 Jan 30 00:16:17 crc kubenswrapper[5104]: I0130 00:16:17.764231 5104 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-mb4lh"] Jan 30 00:16:17 crc kubenswrapper[5104]: I0130 00:16:17.764619 5104 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-547dbd544d-mb4lh" podUID="b5f128e0-a6da-409d-9937-dc7f8b000da0" containerName="marketplace-operator" containerID="cri-o://9a99b79562b543b8478c0c9793192f0c534ba468fdaaea406e68dfc73717569a" gracePeriod=30 Jan 30 00:16:17 crc kubenswrapper[5104]: I0130 00:16:17.773453 5104 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-whc9q"] Jan 30 00:16:17 crc kubenswrapper[5104]: I0130 00:16:17.773919 5104 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-whc9q" podUID="ed75038d-3a8a-493b-8fda-d9722d334034" containerName="registry-server" containerID="cri-o://387ac68e209a2254eee0bbd3d6e37216dd41d4f89549150df78bf8d81c89993b" gracePeriod=30 Jan 30 00:16:17 crc kubenswrapper[5104]: I0130 00:16:17.785884 5104 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-f55x5"] Jan 30 00:16:17 crc kubenswrapper[5104]: I0130 00:16:17.786141 5104 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-f55x5" podUID="9d42c1eb-8eda-4e38-a26c-970e32c818bb" containerName="registry-server" containerID="cri-o://5965ad0cf0097ef931a7347c4d50c3c21ae48620fd5630a10ea26ca2b9aa8474" gracePeriod=30 Jan 30 00:16:17 crc kubenswrapper[5104]: I0130 00:16:17.792546 5104 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-n5spl"] Jan 30 00:16:17 crc kubenswrapper[5104]: I0130 00:16:17.802531 5104 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-n5spl" Jan 30 00:16:17 crc kubenswrapper[5104]: I0130 00:16:17.813242 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-n5spl"] Jan 30 00:16:17 crc kubenswrapper[5104]: I0130 00:16:17.911884 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f14474a2-e628-439c-8bbb-981e1a035991-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-n5spl\" (UID: \"f14474a2-e628-439c-8bbb-981e1a035991\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-n5spl" Jan 30 00:16:17 crc kubenswrapper[5104]: I0130 00:16:17.912071 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvtdk\" (UniqueName: \"kubernetes.io/projected/f14474a2-e628-439c-8bbb-981e1a035991-kube-api-access-qvtdk\") pod \"marketplace-operator-547dbd544d-n5spl\" (UID: \"f14474a2-e628-439c-8bbb-981e1a035991\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-n5spl" Jan 30 00:16:17 crc kubenswrapper[5104]: I0130 00:16:17.912324 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f14474a2-e628-439c-8bbb-981e1a035991-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-n5spl\" (UID: \"f14474a2-e628-439c-8bbb-981e1a035991\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-n5spl" Jan 30 00:16:17 crc kubenswrapper[5104]: I0130 00:16:17.912383 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f14474a2-e628-439c-8bbb-981e1a035991-tmp\") pod \"marketplace-operator-547dbd544d-n5spl\" (UID: \"f14474a2-e628-439c-8bbb-981e1a035991\") " 
pod="openshift-marketplace/marketplace-operator-547dbd544d-n5spl" Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.013147 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f14474a2-e628-439c-8bbb-981e1a035991-tmp\") pod \"marketplace-operator-547dbd544d-n5spl\" (UID: \"f14474a2-e628-439c-8bbb-981e1a035991\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-n5spl" Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.013217 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f14474a2-e628-439c-8bbb-981e1a035991-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-n5spl\" (UID: \"f14474a2-e628-439c-8bbb-981e1a035991\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-n5spl" Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.013246 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qvtdk\" (UniqueName: \"kubernetes.io/projected/f14474a2-e628-439c-8bbb-981e1a035991-kube-api-access-qvtdk\") pod \"marketplace-operator-547dbd544d-n5spl\" (UID: \"f14474a2-e628-439c-8bbb-981e1a035991\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-n5spl" Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.013322 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f14474a2-e628-439c-8bbb-981e1a035991-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-n5spl\" (UID: \"f14474a2-e628-439c-8bbb-981e1a035991\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-n5spl" Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.014457 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: 
\"kubernetes.io/empty-dir/f14474a2-e628-439c-8bbb-981e1a035991-tmp\") pod \"marketplace-operator-547dbd544d-n5spl\" (UID: \"f14474a2-e628-439c-8bbb-981e1a035991\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-n5spl" Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.014756 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f14474a2-e628-439c-8bbb-981e1a035991-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-n5spl\" (UID: \"f14474a2-e628-439c-8bbb-981e1a035991\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-n5spl" Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.019699 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f14474a2-e628-439c-8bbb-981e1a035991-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-n5spl\" (UID: \"f14474a2-e628-439c-8bbb-981e1a035991\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-n5spl" Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.035416 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvtdk\" (UniqueName: \"kubernetes.io/projected/f14474a2-e628-439c-8bbb-981e1a035991-kube-api-access-qvtdk\") pod \"marketplace-operator-547dbd544d-n5spl\" (UID: \"f14474a2-e628-439c-8bbb-981e1a035991\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-n5spl" Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.181214 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-n5spl" Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.188708 5104 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-r6xks" Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.191020 5104 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-mb4lh" Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.196933 5104 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-whc9q" Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.255524 5104 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kzfbd" Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.256820 5104 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f55x5" Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.323414 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s44xq\" (UniqueName: \"kubernetes.io/projected/103981ae-943d-41ab-a2d1-9cafe7669187-kube-api-access-s44xq\") pod \"103981ae-943d-41ab-a2d1-9cafe7669187\" (UID: \"103981ae-943d-41ab-a2d1-9cafe7669187\") " Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.323490 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/103981ae-943d-41ab-a2d1-9cafe7669187-catalog-content\") pod \"103981ae-943d-41ab-a2d1-9cafe7669187\" (UID: \"103981ae-943d-41ab-a2d1-9cafe7669187\") " Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.323509 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ckvff\" (UniqueName: \"kubernetes.io/projected/ed75038d-3a8a-493b-8fda-d9722d334034-kube-api-access-ckvff\") pod \"ed75038d-3a8a-493b-8fda-d9722d334034\" (UID: \"ed75038d-3a8a-493b-8fda-d9722d334034\") " Jan 30 00:16:18 crc 
kubenswrapper[5104]: I0130 00:16:18.323536 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed75038d-3a8a-493b-8fda-d9722d334034-catalog-content\") pod \"ed75038d-3a8a-493b-8fda-d9722d334034\" (UID: \"ed75038d-3a8a-493b-8fda-d9722d334034\") " Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.323587 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cx79z\" (UniqueName: \"kubernetes.io/projected/b5f128e0-a6da-409d-9937-dc7f8b000da0-kube-api-access-cx79z\") pod \"b5f128e0-a6da-409d-9937-dc7f8b000da0\" (UID: \"b5f128e0-a6da-409d-9937-dc7f8b000da0\") " Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.323620 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b5f128e0-a6da-409d-9937-dc7f8b000da0-tmp\") pod \"b5f128e0-a6da-409d-9937-dc7f8b000da0\" (UID: \"b5f128e0-a6da-409d-9937-dc7f8b000da0\") " Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.323655 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed75038d-3a8a-493b-8fda-d9722d334034-utilities\") pod \"ed75038d-3a8a-493b-8fda-d9722d334034\" (UID: \"ed75038d-3a8a-493b-8fda-d9722d334034\") " Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.323693 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/103981ae-943d-41ab-a2d1-9cafe7669187-utilities\") pod \"103981ae-943d-41ab-a2d1-9cafe7669187\" (UID: \"103981ae-943d-41ab-a2d1-9cafe7669187\") " Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.323769 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b5f128e0-a6da-409d-9937-dc7f8b000da0-marketplace-trusted-ca\") 
pod \"b5f128e0-a6da-409d-9937-dc7f8b000da0\" (UID: \"b5f128e0-a6da-409d-9937-dc7f8b000da0\") " Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.323810 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b5f128e0-a6da-409d-9937-dc7f8b000da0-marketplace-operator-metrics\") pod \"b5f128e0-a6da-409d-9937-dc7f8b000da0\" (UID: \"b5f128e0-a6da-409d-9937-dc7f8b000da0\") " Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.324780 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/103981ae-943d-41ab-a2d1-9cafe7669187-utilities" (OuterVolumeSpecName: "utilities") pod "103981ae-943d-41ab-a2d1-9cafe7669187" (UID: "103981ae-943d-41ab-a2d1-9cafe7669187"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.325195 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed75038d-3a8a-493b-8fda-d9722d334034-utilities" (OuterVolumeSpecName: "utilities") pod "ed75038d-3a8a-493b-8fda-d9722d334034" (UID: "ed75038d-3a8a-493b-8fda-d9722d334034"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.326154 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5f128e0-a6da-409d-9937-dc7f8b000da0-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b5f128e0-a6da-409d-9937-dc7f8b000da0" (UID: "b5f128e0-a6da-409d-9937-dc7f8b000da0"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.328404 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5f128e0-a6da-409d-9937-dc7f8b000da0-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b5f128e0-a6da-409d-9937-dc7f8b000da0" (UID: "b5f128e0-a6da-409d-9937-dc7f8b000da0"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.329086 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/103981ae-943d-41ab-a2d1-9cafe7669187-kube-api-access-s44xq" (OuterVolumeSpecName: "kube-api-access-s44xq") pod "103981ae-943d-41ab-a2d1-9cafe7669187" (UID: "103981ae-943d-41ab-a2d1-9cafe7669187"). InnerVolumeSpecName "kube-api-access-s44xq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.329082 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5f128e0-a6da-409d-9937-dc7f8b000da0-tmp" (OuterVolumeSpecName: "tmp") pod "b5f128e0-a6da-409d-9937-dc7f8b000da0" (UID: "b5f128e0-a6da-409d-9937-dc7f8b000da0"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.331628 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5f128e0-a6da-409d-9937-dc7f8b000da0-kube-api-access-cx79z" (OuterVolumeSpecName: "kube-api-access-cx79z") pod "b5f128e0-a6da-409d-9937-dc7f8b000da0" (UID: "b5f128e0-a6da-409d-9937-dc7f8b000da0"). InnerVolumeSpecName "kube-api-access-cx79z". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.337830 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed75038d-3a8a-493b-8fda-d9722d334034-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ed75038d-3a8a-493b-8fda-d9722d334034" (UID: "ed75038d-3a8a-493b-8fda-d9722d334034"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.346033 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed75038d-3a8a-493b-8fda-d9722d334034-kube-api-access-ckvff" (OuterVolumeSpecName: "kube-api-access-ckvff") pod "ed75038d-3a8a-493b-8fda-d9722d334034" (UID: "ed75038d-3a8a-493b-8fda-d9722d334034"). InnerVolumeSpecName "kube-api-access-ckvff". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.364762 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/103981ae-943d-41ab-a2d1-9cafe7669187-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "103981ae-943d-41ab-a2d1-9cafe7669187" (UID: "103981ae-943d-41ab-a2d1-9cafe7669187"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.418253 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-n5spl"] Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.425009 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74720252-7847-489b-a755-3c27d70770f9-catalog-content\") pod \"74720252-7847-489b-a755-3c27d70770f9\" (UID: \"74720252-7847-489b-a755-3c27d70770f9\") " Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.425136 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74720252-7847-489b-a755-3c27d70770f9-utilities\") pod \"74720252-7847-489b-a755-3c27d70770f9\" (UID: \"74720252-7847-489b-a755-3c27d70770f9\") " Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.425174 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4wksv\" (UniqueName: \"kubernetes.io/projected/9d42c1eb-8eda-4e38-a26c-970e32c818bb-kube-api-access-4wksv\") pod \"9d42c1eb-8eda-4e38-a26c-970e32c818bb\" (UID: \"9d42c1eb-8eda-4e38-a26c-970e32c818bb\") " Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.425224 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6mn29\" (UniqueName: \"kubernetes.io/projected/74720252-7847-489b-a755-3c27d70770f9-kube-api-access-6mn29\") pod \"74720252-7847-489b-a755-3c27d70770f9\" (UID: \"74720252-7847-489b-a755-3c27d70770f9\") " Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.425289 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d42c1eb-8eda-4e38-a26c-970e32c818bb-catalog-content\") pod \"9d42c1eb-8eda-4e38-a26c-970e32c818bb\" (UID: 
\"9d42c1eb-8eda-4e38-a26c-970e32c818bb\") " Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.425396 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d42c1eb-8eda-4e38-a26c-970e32c818bb-utilities\") pod \"9d42c1eb-8eda-4e38-a26c-970e32c818bb\" (UID: \"9d42c1eb-8eda-4e38-a26c-970e32c818bb\") " Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.425636 5104 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-s44xq\" (UniqueName: \"kubernetes.io/projected/103981ae-943d-41ab-a2d1-9cafe7669187-kube-api-access-s44xq\") on node \"crc\" DevicePath \"\"" Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.425659 5104 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/103981ae-943d-41ab-a2d1-9cafe7669187-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.425672 5104 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ckvff\" (UniqueName: \"kubernetes.io/projected/ed75038d-3a8a-493b-8fda-d9722d334034-kube-api-access-ckvff\") on node \"crc\" DevicePath \"\"" Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.425685 5104 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed75038d-3a8a-493b-8fda-d9722d334034-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.425697 5104 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-cx79z\" (UniqueName: \"kubernetes.io/projected/b5f128e0-a6da-409d-9937-dc7f8b000da0-kube-api-access-cx79z\") on node \"crc\" DevicePath \"\"" Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.425710 5104 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b5f128e0-a6da-409d-9937-dc7f8b000da0-tmp\") on node 
\"crc\" DevicePath \"\"" Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.425725 5104 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed75038d-3a8a-493b-8fda-d9722d334034-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.425736 5104 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/103981ae-943d-41ab-a2d1-9cafe7669187-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.425748 5104 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b5f128e0-a6da-409d-9937-dc7f8b000da0-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.425759 5104 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b5f128e0-a6da-409d-9937-dc7f8b000da0-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.426791 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74720252-7847-489b-a755-3c27d70770f9-utilities" (OuterVolumeSpecName: "utilities") pod "74720252-7847-489b-a755-3c27d70770f9" (UID: "74720252-7847-489b-a755-3c27d70770f9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.427408 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d42c1eb-8eda-4e38-a26c-970e32c818bb-utilities" (OuterVolumeSpecName: "utilities") pod "9d42c1eb-8eda-4e38-a26c-970e32c818bb" (UID: "9d42c1eb-8eda-4e38-a26c-970e32c818bb"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.429918 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d42c1eb-8eda-4e38-a26c-970e32c818bb-kube-api-access-4wksv" (OuterVolumeSpecName: "kube-api-access-4wksv") pod "9d42c1eb-8eda-4e38-a26c-970e32c818bb" (UID: "9d42c1eb-8eda-4e38-a26c-970e32c818bb"). InnerVolumeSpecName "kube-api-access-4wksv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.429969 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74720252-7847-489b-a755-3c27d70770f9-kube-api-access-6mn29" (OuterVolumeSpecName: "kube-api-access-6mn29") pod "74720252-7847-489b-a755-3c27d70770f9" (UID: "74720252-7847-489b-a755-3c27d70770f9"). InnerVolumeSpecName "kube-api-access-6mn29". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.481122 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74720252-7847-489b-a755-3c27d70770f9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "74720252-7847-489b-a755-3c27d70770f9" (UID: "74720252-7847-489b-a755-3c27d70770f9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.526658 5104 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d42c1eb-8eda-4e38-a26c-970e32c818bb-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.527142 5104 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74720252-7847-489b-a755-3c27d70770f9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.527156 5104 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74720252-7847-489b-a755-3c27d70770f9-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.527165 5104 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4wksv\" (UniqueName: \"kubernetes.io/projected/9d42c1eb-8eda-4e38-a26c-970e32c818bb-kube-api-access-4wksv\") on node \"crc\" DevicePath \"\"" Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.527175 5104 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6mn29\" (UniqueName: \"kubernetes.io/projected/74720252-7847-489b-a755-3c27d70770f9-kube-api-access-6mn29\") on node \"crc\" DevicePath \"\"" Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.569792 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d42c1eb-8eda-4e38-a26c-970e32c818bb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9d42c1eb-8eda-4e38-a26c-970e32c818bb" (UID: "9d42c1eb-8eda-4e38-a26c-970e32c818bb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.629429 5104 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d42c1eb-8eda-4e38-a26c-970e32c818bb-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.783259 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-n5spl" event={"ID":"f14474a2-e628-439c-8bbb-981e1a035991","Type":"ContainerStarted","Data":"38bf2be6d3ab37cdbce2cdcd69a5346d631dcf162574b83e38451e17096a1073"} Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.783311 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-n5spl" event={"ID":"f14474a2-e628-439c-8bbb-981e1a035991","Type":"ContainerStarted","Data":"b880d2e54bac9f9a8d2a9c1b264e4dc300cad8f14019d8bda2d84d5798bfda92"} Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.783807 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-n5spl" Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.785365 5104 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-n5spl container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.67:8080/healthz\": dial tcp 10.217.0.67:8080: connect: connection refused" start-of-body= Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.785441 5104 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-n5spl" podUID="f14474a2-e628-439c-8bbb-981e1a035991" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.67:8080/healthz\": dial tcp 10.217.0.67:8080: connect: connection refused" Jan 30 00:16:18 crc kubenswrapper[5104]: 
I0130 00:16:18.787706 5104 generic.go:358] "Generic (PLEG): container finished" podID="9d42c1eb-8eda-4e38-a26c-970e32c818bb" containerID="5965ad0cf0097ef931a7347c4d50c3c21ae48620fd5630a10ea26ca2b9aa8474" exitCode=0 Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.787972 5104 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f55x5" Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.788469 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f55x5" event={"ID":"9d42c1eb-8eda-4e38-a26c-970e32c818bb","Type":"ContainerDied","Data":"5965ad0cf0097ef931a7347c4d50c3c21ae48620fd5630a10ea26ca2b9aa8474"} Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.788517 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f55x5" event={"ID":"9d42c1eb-8eda-4e38-a26c-970e32c818bb","Type":"ContainerDied","Data":"c9e23068cd296270d2da6b3fa0c8f2cc09dff0799bd5f17a28d0005f327d5de3"} Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.788545 5104 scope.go:117] "RemoveContainer" containerID="5965ad0cf0097ef931a7347c4d50c3c21ae48620fd5630a10ea26ca2b9aa8474" Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.792416 5104 generic.go:358] "Generic (PLEG): container finished" podID="74720252-7847-489b-a755-3c27d70770f9" containerID="2ad59b4028069a6e44ad6ecdcbb11f91ed66f86ccf034ac19041ca9da07a008a" exitCode=0 Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.792526 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kzfbd" event={"ID":"74720252-7847-489b-a755-3c27d70770f9","Type":"ContainerDied","Data":"2ad59b4028069a6e44ad6ecdcbb11f91ed66f86ccf034ac19041ca9da07a008a"} Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.792559 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kzfbd" 
event={"ID":"74720252-7847-489b-a755-3c27d70770f9","Type":"ContainerDied","Data":"f161f79c2a616da69b79c68306f8271956c9d2d340268ef51b9e4e952d74b0dc"} Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.792720 5104 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kzfbd" Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.801449 5104 generic.go:358] "Generic (PLEG): container finished" podID="103981ae-943d-41ab-a2d1-9cafe7669187" containerID="fbc4871e886fffe25e40556cd82bd1c53ece447e07f72d5836ea4f28c1b2a9c5" exitCode=0 Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.801934 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r6xks" event={"ID":"103981ae-943d-41ab-a2d1-9cafe7669187","Type":"ContainerDied","Data":"fbc4871e886fffe25e40556cd82bd1c53ece447e07f72d5836ea4f28c1b2a9c5"} Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.801980 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r6xks" event={"ID":"103981ae-943d-41ab-a2d1-9cafe7669187","Type":"ContainerDied","Data":"3e5429138ed3b241e33c7b3d345094fd08b25a7a2e21537d0a0f7d6cbb1bab04"} Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.802083 5104 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-r6xks" Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.805556 5104 generic.go:358] "Generic (PLEG): container finished" podID="b5f128e0-a6da-409d-9937-dc7f8b000da0" containerID="9a99b79562b543b8478c0c9793192f0c534ba468fdaaea406e68dfc73717569a" exitCode=0 Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.805636 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-mb4lh" event={"ID":"b5f128e0-a6da-409d-9937-dc7f8b000da0","Type":"ContainerDied","Data":"9a99b79562b543b8478c0c9793192f0c534ba468fdaaea406e68dfc73717569a"} Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.805659 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-mb4lh" event={"ID":"b5f128e0-a6da-409d-9937-dc7f8b000da0","Type":"ContainerDied","Data":"31d071cc6a2990d8b046a07f31b22bcd7079e2f4f5ec39eb510d96cdfa48ff6f"} Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.805765 5104 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-mb4lh" Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.809145 5104 scope.go:117] "RemoveContainer" containerID="5c147ccf8753dd935ae5ede5a71d4e03e4964d69a1ad1c882aa4988d33276978" Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.809238 5104 generic.go:358] "Generic (PLEG): container finished" podID="ed75038d-3a8a-493b-8fda-d9722d334034" containerID="387ac68e209a2254eee0bbd3d6e37216dd41d4f89549150df78bf8d81c89993b" exitCode=0 Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.809311 5104 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-whc9q" Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.809319 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-whc9q" event={"ID":"ed75038d-3a8a-493b-8fda-d9722d334034","Type":"ContainerDied","Data":"387ac68e209a2254eee0bbd3d6e37216dd41d4f89549150df78bf8d81c89993b"} Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.812018 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-whc9q" event={"ID":"ed75038d-3a8a-493b-8fda-d9722d334034","Type":"ContainerDied","Data":"c9e6f453e603b7995c75c51722cf529d20219169dafe11c18afa19332a303641"} Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.831624 5104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-n5spl" podStartSLOduration=1.831471038 podStartE2EDuration="1.831471038s" podCreationTimestamp="2026-01-30 00:16:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:16:18.811899008 +0000 UTC m=+359.544238267" watchObservedRunningTime="2026-01-30 00:16:18.831471038 +0000 UTC m=+359.563810257" Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.833533 5104 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-mb4lh"] Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.848976 5104 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-mb4lh"] Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.851360 5104 scope.go:117] "RemoveContainer" containerID="ccee0083d880793402fc2d4b5bcc55e2843c85af9029e30ad87c01f3d0cd3e9e" Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.853841 5104 kubelet.go:2553] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-kzfbd"] Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.858543 5104 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-kzfbd"] Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.862254 5104 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r6xks"] Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.867963 5104 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-r6xks"] Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.872610 5104 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-whc9q"] Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.877199 5104 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-whc9q"] Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.879971 5104 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-f55x5"] Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.882778 5104 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-f55x5"] Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.891280 5104 scope.go:117] "RemoveContainer" containerID="5965ad0cf0097ef931a7347c4d50c3c21ae48620fd5630a10ea26ca2b9aa8474" Jan 30 00:16:18 crc kubenswrapper[5104]: E0130 00:16:18.891701 5104 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5965ad0cf0097ef931a7347c4d50c3c21ae48620fd5630a10ea26ca2b9aa8474\": container with ID starting with 5965ad0cf0097ef931a7347c4d50c3c21ae48620fd5630a10ea26ca2b9aa8474 not found: ID does not exist" containerID="5965ad0cf0097ef931a7347c4d50c3c21ae48620fd5630a10ea26ca2b9aa8474" Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.891723 5104 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5965ad0cf0097ef931a7347c4d50c3c21ae48620fd5630a10ea26ca2b9aa8474"} err="failed to get container status \"5965ad0cf0097ef931a7347c4d50c3c21ae48620fd5630a10ea26ca2b9aa8474\": rpc error: code = NotFound desc = could not find container \"5965ad0cf0097ef931a7347c4d50c3c21ae48620fd5630a10ea26ca2b9aa8474\": container with ID starting with 5965ad0cf0097ef931a7347c4d50c3c21ae48620fd5630a10ea26ca2b9aa8474 not found: ID does not exist" Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.891740 5104 scope.go:117] "RemoveContainer" containerID="5c147ccf8753dd935ae5ede5a71d4e03e4964d69a1ad1c882aa4988d33276978" Jan 30 00:16:18 crc kubenswrapper[5104]: E0130 00:16:18.892070 5104 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c147ccf8753dd935ae5ede5a71d4e03e4964d69a1ad1c882aa4988d33276978\": container with ID starting with 5c147ccf8753dd935ae5ede5a71d4e03e4964d69a1ad1c882aa4988d33276978 not found: ID does not exist" containerID="5c147ccf8753dd935ae5ede5a71d4e03e4964d69a1ad1c882aa4988d33276978" Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.892083 5104 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c147ccf8753dd935ae5ede5a71d4e03e4964d69a1ad1c882aa4988d33276978"} err="failed to get container status \"5c147ccf8753dd935ae5ede5a71d4e03e4964d69a1ad1c882aa4988d33276978\": rpc error: code = NotFound desc = could not find container \"5c147ccf8753dd935ae5ede5a71d4e03e4964d69a1ad1c882aa4988d33276978\": container with ID starting with 5c147ccf8753dd935ae5ede5a71d4e03e4964d69a1ad1c882aa4988d33276978 not found: ID does not exist" Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.892094 5104 scope.go:117] "RemoveContainer" containerID="ccee0083d880793402fc2d4b5bcc55e2843c85af9029e30ad87c01f3d0cd3e9e" Jan 30 00:16:18 crc kubenswrapper[5104]: E0130 
00:16:18.892240 5104 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ccee0083d880793402fc2d4b5bcc55e2843c85af9029e30ad87c01f3d0cd3e9e\": container with ID starting with ccee0083d880793402fc2d4b5bcc55e2843c85af9029e30ad87c01f3d0cd3e9e not found: ID does not exist" containerID="ccee0083d880793402fc2d4b5bcc55e2843c85af9029e30ad87c01f3d0cd3e9e" Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.892253 5104 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ccee0083d880793402fc2d4b5bcc55e2843c85af9029e30ad87c01f3d0cd3e9e"} err="failed to get container status \"ccee0083d880793402fc2d4b5bcc55e2843c85af9029e30ad87c01f3d0cd3e9e\": rpc error: code = NotFound desc = could not find container \"ccee0083d880793402fc2d4b5bcc55e2843c85af9029e30ad87c01f3d0cd3e9e\": container with ID starting with ccee0083d880793402fc2d4b5bcc55e2843c85af9029e30ad87c01f3d0cd3e9e not found: ID does not exist" Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.892263 5104 scope.go:117] "RemoveContainer" containerID="2ad59b4028069a6e44ad6ecdcbb11f91ed66f86ccf034ac19041ca9da07a008a" Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.906441 5104 scope.go:117] "RemoveContainer" containerID="7e27c3568ca3c540ebbb04d81dc266d87456856f101216fdf4b8cdb48e005ca9" Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.929466 5104 scope.go:117] "RemoveContainer" containerID="f4e3aeefd3ee13cc24315a90b463f3dd5b37e407ca51a5b7dd6a3e0e2bcd9b21" Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.948728 5104 scope.go:117] "RemoveContainer" containerID="2ad59b4028069a6e44ad6ecdcbb11f91ed66f86ccf034ac19041ca9da07a008a" Jan 30 00:16:18 crc kubenswrapper[5104]: E0130 00:16:18.949163 5104 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ad59b4028069a6e44ad6ecdcbb11f91ed66f86ccf034ac19041ca9da07a008a\": container 
with ID starting with 2ad59b4028069a6e44ad6ecdcbb11f91ed66f86ccf034ac19041ca9da07a008a not found: ID does not exist" containerID="2ad59b4028069a6e44ad6ecdcbb11f91ed66f86ccf034ac19041ca9da07a008a" Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.949302 5104 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ad59b4028069a6e44ad6ecdcbb11f91ed66f86ccf034ac19041ca9da07a008a"} err="failed to get container status \"2ad59b4028069a6e44ad6ecdcbb11f91ed66f86ccf034ac19041ca9da07a008a\": rpc error: code = NotFound desc = could not find container \"2ad59b4028069a6e44ad6ecdcbb11f91ed66f86ccf034ac19041ca9da07a008a\": container with ID starting with 2ad59b4028069a6e44ad6ecdcbb11f91ed66f86ccf034ac19041ca9da07a008a not found: ID does not exist" Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.949389 5104 scope.go:117] "RemoveContainer" containerID="7e27c3568ca3c540ebbb04d81dc266d87456856f101216fdf4b8cdb48e005ca9" Jan 30 00:16:18 crc kubenswrapper[5104]: E0130 00:16:18.950511 5104 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e27c3568ca3c540ebbb04d81dc266d87456856f101216fdf4b8cdb48e005ca9\": container with ID starting with 7e27c3568ca3c540ebbb04d81dc266d87456856f101216fdf4b8cdb48e005ca9 not found: ID does not exist" containerID="7e27c3568ca3c540ebbb04d81dc266d87456856f101216fdf4b8cdb48e005ca9" Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.950565 5104 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e27c3568ca3c540ebbb04d81dc266d87456856f101216fdf4b8cdb48e005ca9"} err="failed to get container status \"7e27c3568ca3c540ebbb04d81dc266d87456856f101216fdf4b8cdb48e005ca9\": rpc error: code = NotFound desc = could not find container \"7e27c3568ca3c540ebbb04d81dc266d87456856f101216fdf4b8cdb48e005ca9\": container with ID starting with 7e27c3568ca3c540ebbb04d81dc266d87456856f101216fdf4b8cdb48e005ca9 not 
found: ID does not exist" Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.950594 5104 scope.go:117] "RemoveContainer" containerID="f4e3aeefd3ee13cc24315a90b463f3dd5b37e407ca51a5b7dd6a3e0e2bcd9b21" Jan 30 00:16:18 crc kubenswrapper[5104]: E0130 00:16:18.951095 5104 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4e3aeefd3ee13cc24315a90b463f3dd5b37e407ca51a5b7dd6a3e0e2bcd9b21\": container with ID starting with f4e3aeefd3ee13cc24315a90b463f3dd5b37e407ca51a5b7dd6a3e0e2bcd9b21 not found: ID does not exist" containerID="f4e3aeefd3ee13cc24315a90b463f3dd5b37e407ca51a5b7dd6a3e0e2bcd9b21" Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.951175 5104 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4e3aeefd3ee13cc24315a90b463f3dd5b37e407ca51a5b7dd6a3e0e2bcd9b21"} err="failed to get container status \"f4e3aeefd3ee13cc24315a90b463f3dd5b37e407ca51a5b7dd6a3e0e2bcd9b21\": rpc error: code = NotFound desc = could not find container \"f4e3aeefd3ee13cc24315a90b463f3dd5b37e407ca51a5b7dd6a3e0e2bcd9b21\": container with ID starting with f4e3aeefd3ee13cc24315a90b463f3dd5b37e407ca51a5b7dd6a3e0e2bcd9b21 not found: ID does not exist" Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.951247 5104 scope.go:117] "RemoveContainer" containerID="fbc4871e886fffe25e40556cd82bd1c53ece447e07f72d5836ea4f28c1b2a9c5" Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.964603 5104 scope.go:117] "RemoveContainer" containerID="d101219cb437e85fe91d77e1cb245802a5cd12d25b9e806b571029c56d35016e" Jan 30 00:16:18 crc kubenswrapper[5104]: I0130 00:16:18.981785 5104 scope.go:117] "RemoveContainer" containerID="c0d91a7e4158ce595bd843c6a3fddd19ff0af4a771b60f4965a55add1741a639" Jan 30 00:16:19 crc kubenswrapper[5104]: I0130 00:16:19.013416 5104 scope.go:117] "RemoveContainer" containerID="fbc4871e886fffe25e40556cd82bd1c53ece447e07f72d5836ea4f28c1b2a9c5" Jan 30 00:16:19 crc 
kubenswrapper[5104]: E0130 00:16:19.014043 5104 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbc4871e886fffe25e40556cd82bd1c53ece447e07f72d5836ea4f28c1b2a9c5\": container with ID starting with fbc4871e886fffe25e40556cd82bd1c53ece447e07f72d5836ea4f28c1b2a9c5 not found: ID does not exist" containerID="fbc4871e886fffe25e40556cd82bd1c53ece447e07f72d5836ea4f28c1b2a9c5" Jan 30 00:16:19 crc kubenswrapper[5104]: I0130 00:16:19.014103 5104 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbc4871e886fffe25e40556cd82bd1c53ece447e07f72d5836ea4f28c1b2a9c5"} err="failed to get container status \"fbc4871e886fffe25e40556cd82bd1c53ece447e07f72d5836ea4f28c1b2a9c5\": rpc error: code = NotFound desc = could not find container \"fbc4871e886fffe25e40556cd82bd1c53ece447e07f72d5836ea4f28c1b2a9c5\": container with ID starting with fbc4871e886fffe25e40556cd82bd1c53ece447e07f72d5836ea4f28c1b2a9c5 not found: ID does not exist" Jan 30 00:16:19 crc kubenswrapper[5104]: I0130 00:16:19.014137 5104 scope.go:117] "RemoveContainer" containerID="d101219cb437e85fe91d77e1cb245802a5cd12d25b9e806b571029c56d35016e" Jan 30 00:16:19 crc kubenswrapper[5104]: E0130 00:16:19.014494 5104 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d101219cb437e85fe91d77e1cb245802a5cd12d25b9e806b571029c56d35016e\": container with ID starting with d101219cb437e85fe91d77e1cb245802a5cd12d25b9e806b571029c56d35016e not found: ID does not exist" containerID="d101219cb437e85fe91d77e1cb245802a5cd12d25b9e806b571029c56d35016e" Jan 30 00:16:19 crc kubenswrapper[5104]: I0130 00:16:19.014532 5104 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d101219cb437e85fe91d77e1cb245802a5cd12d25b9e806b571029c56d35016e"} err="failed to get container status 
\"d101219cb437e85fe91d77e1cb245802a5cd12d25b9e806b571029c56d35016e\": rpc error: code = NotFound desc = could not find container \"d101219cb437e85fe91d77e1cb245802a5cd12d25b9e806b571029c56d35016e\": container with ID starting with d101219cb437e85fe91d77e1cb245802a5cd12d25b9e806b571029c56d35016e not found: ID does not exist" Jan 30 00:16:19 crc kubenswrapper[5104]: I0130 00:16:19.014557 5104 scope.go:117] "RemoveContainer" containerID="c0d91a7e4158ce595bd843c6a3fddd19ff0af4a771b60f4965a55add1741a639" Jan 30 00:16:19 crc kubenswrapper[5104]: E0130 00:16:19.014898 5104 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0d91a7e4158ce595bd843c6a3fddd19ff0af4a771b60f4965a55add1741a639\": container with ID starting with c0d91a7e4158ce595bd843c6a3fddd19ff0af4a771b60f4965a55add1741a639 not found: ID does not exist" containerID="c0d91a7e4158ce595bd843c6a3fddd19ff0af4a771b60f4965a55add1741a639" Jan 30 00:16:19 crc kubenswrapper[5104]: I0130 00:16:19.014948 5104 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0d91a7e4158ce595bd843c6a3fddd19ff0af4a771b60f4965a55add1741a639"} err="failed to get container status \"c0d91a7e4158ce595bd843c6a3fddd19ff0af4a771b60f4965a55add1741a639\": rpc error: code = NotFound desc = could not find container \"c0d91a7e4158ce595bd843c6a3fddd19ff0af4a771b60f4965a55add1741a639\": container with ID starting with c0d91a7e4158ce595bd843c6a3fddd19ff0af4a771b60f4965a55add1741a639 not found: ID does not exist" Jan 30 00:16:19 crc kubenswrapper[5104]: I0130 00:16:19.014977 5104 scope.go:117] "RemoveContainer" containerID="9a99b79562b543b8478c0c9793192f0c534ba468fdaaea406e68dfc73717569a" Jan 30 00:16:19 crc kubenswrapper[5104]: I0130 00:16:19.027439 5104 scope.go:117] "RemoveContainer" containerID="e30d3aeeceacab27a9a72c6a0d28ae371c5d759542a61ab5492afddc30ea0ae0" Jan 30 00:16:19 crc kubenswrapper[5104]: I0130 00:16:19.044035 5104 
scope.go:117] "RemoveContainer" containerID="9a99b79562b543b8478c0c9793192f0c534ba468fdaaea406e68dfc73717569a" Jan 30 00:16:19 crc kubenswrapper[5104]: E0130 00:16:19.045020 5104 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a99b79562b543b8478c0c9793192f0c534ba468fdaaea406e68dfc73717569a\": container with ID starting with 9a99b79562b543b8478c0c9793192f0c534ba468fdaaea406e68dfc73717569a not found: ID does not exist" containerID="9a99b79562b543b8478c0c9793192f0c534ba468fdaaea406e68dfc73717569a" Jan 30 00:16:19 crc kubenswrapper[5104]: I0130 00:16:19.045049 5104 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a99b79562b543b8478c0c9793192f0c534ba468fdaaea406e68dfc73717569a"} err="failed to get container status \"9a99b79562b543b8478c0c9793192f0c534ba468fdaaea406e68dfc73717569a\": rpc error: code = NotFound desc = could not find container \"9a99b79562b543b8478c0c9793192f0c534ba468fdaaea406e68dfc73717569a\": container with ID starting with 9a99b79562b543b8478c0c9793192f0c534ba468fdaaea406e68dfc73717569a not found: ID does not exist" Jan 30 00:16:19 crc kubenswrapper[5104]: I0130 00:16:19.045069 5104 scope.go:117] "RemoveContainer" containerID="e30d3aeeceacab27a9a72c6a0d28ae371c5d759542a61ab5492afddc30ea0ae0" Jan 30 00:16:19 crc kubenswrapper[5104]: E0130 00:16:19.045254 5104 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e30d3aeeceacab27a9a72c6a0d28ae371c5d759542a61ab5492afddc30ea0ae0\": container with ID starting with e30d3aeeceacab27a9a72c6a0d28ae371c5d759542a61ab5492afddc30ea0ae0 not found: ID does not exist" containerID="e30d3aeeceacab27a9a72c6a0d28ae371c5d759542a61ab5492afddc30ea0ae0" Jan 30 00:16:19 crc kubenswrapper[5104]: I0130 00:16:19.045275 5104 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"e30d3aeeceacab27a9a72c6a0d28ae371c5d759542a61ab5492afddc30ea0ae0"} err="failed to get container status \"e30d3aeeceacab27a9a72c6a0d28ae371c5d759542a61ab5492afddc30ea0ae0\": rpc error: code = NotFound desc = could not find container \"e30d3aeeceacab27a9a72c6a0d28ae371c5d759542a61ab5492afddc30ea0ae0\": container with ID starting with e30d3aeeceacab27a9a72c6a0d28ae371c5d759542a61ab5492afddc30ea0ae0 not found: ID does not exist" Jan 30 00:16:19 crc kubenswrapper[5104]: I0130 00:16:19.045287 5104 scope.go:117] "RemoveContainer" containerID="387ac68e209a2254eee0bbd3d6e37216dd41d4f89549150df78bf8d81c89993b" Jan 30 00:16:19 crc kubenswrapper[5104]: I0130 00:16:19.063181 5104 scope.go:117] "RemoveContainer" containerID="6b8d86914e04055be196960c6930e6523debce1b37cb048cefeab597c937e4f5" Jan 30 00:16:19 crc kubenswrapper[5104]: I0130 00:16:19.076656 5104 scope.go:117] "RemoveContainer" containerID="1fbf7106f1007e27af68a503b5bf181443073dbba570fc868e54b0dabbe1307c" Jan 30 00:16:19 crc kubenswrapper[5104]: I0130 00:16:19.095762 5104 scope.go:117] "RemoveContainer" containerID="387ac68e209a2254eee0bbd3d6e37216dd41d4f89549150df78bf8d81c89993b" Jan 30 00:16:19 crc kubenswrapper[5104]: E0130 00:16:19.096513 5104 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"387ac68e209a2254eee0bbd3d6e37216dd41d4f89549150df78bf8d81c89993b\": container with ID starting with 387ac68e209a2254eee0bbd3d6e37216dd41d4f89549150df78bf8d81c89993b not found: ID does not exist" containerID="387ac68e209a2254eee0bbd3d6e37216dd41d4f89549150df78bf8d81c89993b" Jan 30 00:16:19 crc kubenswrapper[5104]: I0130 00:16:19.096551 5104 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"387ac68e209a2254eee0bbd3d6e37216dd41d4f89549150df78bf8d81c89993b"} err="failed to get container status \"387ac68e209a2254eee0bbd3d6e37216dd41d4f89549150df78bf8d81c89993b\": rpc error: code = 
NotFound desc = could not find container \"387ac68e209a2254eee0bbd3d6e37216dd41d4f89549150df78bf8d81c89993b\": container with ID starting with 387ac68e209a2254eee0bbd3d6e37216dd41d4f89549150df78bf8d81c89993b not found: ID does not exist" Jan 30 00:16:19 crc kubenswrapper[5104]: I0130 00:16:19.096605 5104 scope.go:117] "RemoveContainer" containerID="6b8d86914e04055be196960c6930e6523debce1b37cb048cefeab597c937e4f5" Jan 30 00:16:19 crc kubenswrapper[5104]: E0130 00:16:19.097233 5104 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b8d86914e04055be196960c6930e6523debce1b37cb048cefeab597c937e4f5\": container with ID starting with 6b8d86914e04055be196960c6930e6523debce1b37cb048cefeab597c937e4f5 not found: ID does not exist" containerID="6b8d86914e04055be196960c6930e6523debce1b37cb048cefeab597c937e4f5" Jan 30 00:16:19 crc kubenswrapper[5104]: I0130 00:16:19.097252 5104 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b8d86914e04055be196960c6930e6523debce1b37cb048cefeab597c937e4f5"} err="failed to get container status \"6b8d86914e04055be196960c6930e6523debce1b37cb048cefeab597c937e4f5\": rpc error: code = NotFound desc = could not find container \"6b8d86914e04055be196960c6930e6523debce1b37cb048cefeab597c937e4f5\": container with ID starting with 6b8d86914e04055be196960c6930e6523debce1b37cb048cefeab597c937e4f5 not found: ID does not exist" Jan 30 00:16:19 crc kubenswrapper[5104]: I0130 00:16:19.097268 5104 scope.go:117] "RemoveContainer" containerID="1fbf7106f1007e27af68a503b5bf181443073dbba570fc868e54b0dabbe1307c" Jan 30 00:16:19 crc kubenswrapper[5104]: E0130 00:16:19.097678 5104 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1fbf7106f1007e27af68a503b5bf181443073dbba570fc868e54b0dabbe1307c\": container with ID starting with 
1fbf7106f1007e27af68a503b5bf181443073dbba570fc868e54b0dabbe1307c not found: ID does not exist" containerID="1fbf7106f1007e27af68a503b5bf181443073dbba570fc868e54b0dabbe1307c" Jan 30 00:16:19 crc kubenswrapper[5104]: I0130 00:16:19.097743 5104 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1fbf7106f1007e27af68a503b5bf181443073dbba570fc868e54b0dabbe1307c"} err="failed to get container status \"1fbf7106f1007e27af68a503b5bf181443073dbba570fc868e54b0dabbe1307c\": rpc error: code = NotFound desc = could not find container \"1fbf7106f1007e27af68a503b5bf181443073dbba570fc868e54b0dabbe1307c\": container with ID starting with 1fbf7106f1007e27af68a503b5bf181443073dbba570fc868e54b0dabbe1307c not found: ID does not exist" Jan 30 00:16:19 crc kubenswrapper[5104]: I0130 00:16:19.836995 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-n5spl" Jan 30 00:16:20 crc kubenswrapper[5104]: I0130 00:16:20.532842 5104 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="103981ae-943d-41ab-a2d1-9cafe7669187" path="/var/lib/kubelet/pods/103981ae-943d-41ab-a2d1-9cafe7669187/volumes" Jan 30 00:16:20 crc kubenswrapper[5104]: I0130 00:16:20.533666 5104 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74720252-7847-489b-a755-3c27d70770f9" path="/var/lib/kubelet/pods/74720252-7847-489b-a755-3c27d70770f9/volumes" Jan 30 00:16:20 crc kubenswrapper[5104]: I0130 00:16:20.534362 5104 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d42c1eb-8eda-4e38-a26c-970e32c818bb" path="/var/lib/kubelet/pods/9d42c1eb-8eda-4e38-a26c-970e32c818bb/volumes" Jan 30 00:16:20 crc kubenswrapper[5104]: I0130 00:16:20.535637 5104 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5f128e0-a6da-409d-9937-dc7f8b000da0" path="/var/lib/kubelet/pods/b5f128e0-a6da-409d-9937-dc7f8b000da0/volumes" Jan 30 00:16:20 crc 
kubenswrapper[5104]: I0130 00:16:20.536106 5104 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed75038d-3a8a-493b-8fda-d9722d334034" path="/var/lib/kubelet/pods/ed75038d-3a8a-493b-8fda-d9722d334034/volumes" Jan 30 00:16:20 crc kubenswrapper[5104]: I0130 00:16:20.878170 5104 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zdtm9"] Jan 30 00:16:20 crc kubenswrapper[5104]: I0130 00:16:20.878923 5104 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b5f128e0-a6da-409d-9937-dc7f8b000da0" containerName="marketplace-operator" Jan 30 00:16:20 crc kubenswrapper[5104]: I0130 00:16:20.878942 5104 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5f128e0-a6da-409d-9937-dc7f8b000da0" containerName="marketplace-operator" Jan 30 00:16:20 crc kubenswrapper[5104]: I0130 00:16:20.878954 5104 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="74720252-7847-489b-a755-3c27d70770f9" containerName="extract-content" Jan 30 00:16:20 crc kubenswrapper[5104]: I0130 00:16:20.878961 5104 state_mem.go:107] "Deleted CPUSet assignment" podUID="74720252-7847-489b-a755-3c27d70770f9" containerName="extract-content" Jan 30 00:16:20 crc kubenswrapper[5104]: I0130 00:16:20.878971 5104 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="103981ae-943d-41ab-a2d1-9cafe7669187" containerName="registry-server" Jan 30 00:16:20 crc kubenswrapper[5104]: I0130 00:16:20.878979 5104 state_mem.go:107] "Deleted CPUSet assignment" podUID="103981ae-943d-41ab-a2d1-9cafe7669187" containerName="registry-server" Jan 30 00:16:20 crc kubenswrapper[5104]: I0130 00:16:20.878992 5104 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="74720252-7847-489b-a755-3c27d70770f9" containerName="extract-utilities" Jan 30 00:16:20 crc kubenswrapper[5104]: I0130 00:16:20.878999 5104 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="74720252-7847-489b-a755-3c27d70770f9" containerName="extract-utilities" Jan 30 00:16:20 crc kubenswrapper[5104]: I0130 00:16:20.879008 5104 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9d42c1eb-8eda-4e38-a26c-970e32c818bb" containerName="registry-server" Jan 30 00:16:20 crc kubenswrapper[5104]: I0130 00:16:20.879016 5104 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d42c1eb-8eda-4e38-a26c-970e32c818bb" containerName="registry-server" Jan 30 00:16:20 crc kubenswrapper[5104]: I0130 00:16:20.879032 5104 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="74720252-7847-489b-a755-3c27d70770f9" containerName="registry-server" Jan 30 00:16:20 crc kubenswrapper[5104]: I0130 00:16:20.879038 5104 state_mem.go:107] "Deleted CPUSet assignment" podUID="74720252-7847-489b-a755-3c27d70770f9" containerName="registry-server" Jan 30 00:16:20 crc kubenswrapper[5104]: I0130 00:16:20.879051 5104 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ed75038d-3a8a-493b-8fda-d9722d334034" containerName="extract-content" Jan 30 00:16:20 crc kubenswrapper[5104]: I0130 00:16:20.879058 5104 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed75038d-3a8a-493b-8fda-d9722d334034" containerName="extract-content" Jan 30 00:16:20 crc kubenswrapper[5104]: I0130 00:16:20.879069 5104 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ed75038d-3a8a-493b-8fda-d9722d334034" containerName="extract-utilities" Jan 30 00:16:20 crc kubenswrapper[5104]: I0130 00:16:20.879077 5104 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed75038d-3a8a-493b-8fda-d9722d334034" containerName="extract-utilities" Jan 30 00:16:20 crc kubenswrapper[5104]: I0130 00:16:20.879091 5104 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="103981ae-943d-41ab-a2d1-9cafe7669187" containerName="extract-utilities" Jan 30 00:16:20 crc kubenswrapper[5104]: I0130 
00:16:20.879099 5104 state_mem.go:107] "Deleted CPUSet assignment" podUID="103981ae-943d-41ab-a2d1-9cafe7669187" containerName="extract-utilities" Jan 30 00:16:20 crc kubenswrapper[5104]: I0130 00:16:20.879109 5104 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ed75038d-3a8a-493b-8fda-d9722d334034" containerName="registry-server" Jan 30 00:16:20 crc kubenswrapper[5104]: I0130 00:16:20.879117 5104 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed75038d-3a8a-493b-8fda-d9722d334034" containerName="registry-server" Jan 30 00:16:20 crc kubenswrapper[5104]: I0130 00:16:20.879125 5104 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9d42c1eb-8eda-4e38-a26c-970e32c818bb" containerName="extract-content" Jan 30 00:16:20 crc kubenswrapper[5104]: I0130 00:16:20.879132 5104 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d42c1eb-8eda-4e38-a26c-970e32c818bb" containerName="extract-content" Jan 30 00:16:20 crc kubenswrapper[5104]: I0130 00:16:20.879144 5104 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="103981ae-943d-41ab-a2d1-9cafe7669187" containerName="extract-content" Jan 30 00:16:20 crc kubenswrapper[5104]: I0130 00:16:20.879152 5104 state_mem.go:107] "Deleted CPUSet assignment" podUID="103981ae-943d-41ab-a2d1-9cafe7669187" containerName="extract-content" Jan 30 00:16:20 crc kubenswrapper[5104]: I0130 00:16:20.879166 5104 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9d42c1eb-8eda-4e38-a26c-970e32c818bb" containerName="extract-utilities" Jan 30 00:16:20 crc kubenswrapper[5104]: I0130 00:16:20.879173 5104 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d42c1eb-8eda-4e38-a26c-970e32c818bb" containerName="extract-utilities" Jan 30 00:16:20 crc kubenswrapper[5104]: I0130 00:16:20.879281 5104 memory_manager.go:356] "RemoveStaleState removing state" podUID="9d42c1eb-8eda-4e38-a26c-970e32c818bb" containerName="registry-server" 
Jan 30 00:16:20 crc kubenswrapper[5104]: I0130 00:16:20.879295 5104 memory_manager.go:356] "RemoveStaleState removing state" podUID="ed75038d-3a8a-493b-8fda-d9722d334034" containerName="registry-server" Jan 30 00:16:20 crc kubenswrapper[5104]: I0130 00:16:20.879307 5104 memory_manager.go:356] "RemoveStaleState removing state" podUID="b5f128e0-a6da-409d-9937-dc7f8b000da0" containerName="marketplace-operator" Jan 30 00:16:20 crc kubenswrapper[5104]: I0130 00:16:20.879316 5104 memory_manager.go:356] "RemoveStaleState removing state" podUID="74720252-7847-489b-a755-3c27d70770f9" containerName="registry-server" Jan 30 00:16:20 crc kubenswrapper[5104]: I0130 00:16:20.879327 5104 memory_manager.go:356] "RemoveStaleState removing state" podUID="103981ae-943d-41ab-a2d1-9cafe7669187" containerName="registry-server" Jan 30 00:16:20 crc kubenswrapper[5104]: I0130 00:16:20.879338 5104 memory_manager.go:356] "RemoveStaleState removing state" podUID="b5f128e0-a6da-409d-9937-dc7f8b000da0" containerName="marketplace-operator" Jan 30 00:16:20 crc kubenswrapper[5104]: I0130 00:16:20.879424 5104 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b5f128e0-a6da-409d-9937-dc7f8b000da0" containerName="marketplace-operator" Jan 30 00:16:20 crc kubenswrapper[5104]: I0130 00:16:20.879433 5104 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5f128e0-a6da-409d-9937-dc7f8b000da0" containerName="marketplace-operator" Jan 30 00:16:20 crc kubenswrapper[5104]: I0130 00:16:20.883293 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zdtm9"] Jan 30 00:16:20 crc kubenswrapper[5104]: I0130 00:16:20.883410 5104 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zdtm9" Jan 30 00:16:20 crc kubenswrapper[5104]: I0130 00:16:20.886453 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Jan 30 00:16:20 crc kubenswrapper[5104]: I0130 00:16:20.959724 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c42f94f-6ae8-49c5-ba21-54fd74e3329f-catalog-content\") pod \"community-operators-zdtm9\" (UID: \"8c42f94f-6ae8-49c5-ba21-54fd74e3329f\") " pod="openshift-marketplace/community-operators-zdtm9" Jan 30 00:16:20 crc kubenswrapper[5104]: I0130 00:16:20.959898 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c42f94f-6ae8-49c5-ba21-54fd74e3329f-utilities\") pod \"community-operators-zdtm9\" (UID: \"8c42f94f-6ae8-49c5-ba21-54fd74e3329f\") " pod="openshift-marketplace/community-operators-zdtm9" Jan 30 00:16:20 crc kubenswrapper[5104]: I0130 00:16:20.960098 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glf4v\" (UniqueName: \"kubernetes.io/projected/8c42f94f-6ae8-49c5-ba21-54fd74e3329f-kube-api-access-glf4v\") pod \"community-operators-zdtm9\" (UID: \"8c42f94f-6ae8-49c5-ba21-54fd74e3329f\") " pod="openshift-marketplace/community-operators-zdtm9" Jan 30 00:16:21 crc kubenswrapper[5104]: I0130 00:16:21.061992 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c42f94f-6ae8-49c5-ba21-54fd74e3329f-utilities\") pod \"community-operators-zdtm9\" (UID: \"8c42f94f-6ae8-49c5-ba21-54fd74e3329f\") " pod="openshift-marketplace/community-operators-zdtm9" Jan 30 00:16:21 crc kubenswrapper[5104]: I0130 00:16:21.062090 5104 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-glf4v\" (UniqueName: \"kubernetes.io/projected/8c42f94f-6ae8-49c5-ba21-54fd74e3329f-kube-api-access-glf4v\") pod \"community-operators-zdtm9\" (UID: \"8c42f94f-6ae8-49c5-ba21-54fd74e3329f\") " pod="openshift-marketplace/community-operators-zdtm9" Jan 30 00:16:21 crc kubenswrapper[5104]: I0130 00:16:21.062144 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c42f94f-6ae8-49c5-ba21-54fd74e3329f-catalog-content\") pod \"community-operators-zdtm9\" (UID: \"8c42f94f-6ae8-49c5-ba21-54fd74e3329f\") " pod="openshift-marketplace/community-operators-zdtm9" Jan 30 00:16:21 crc kubenswrapper[5104]: I0130 00:16:21.062449 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c42f94f-6ae8-49c5-ba21-54fd74e3329f-utilities\") pod \"community-operators-zdtm9\" (UID: \"8c42f94f-6ae8-49c5-ba21-54fd74e3329f\") " pod="openshift-marketplace/community-operators-zdtm9" Jan 30 00:16:21 crc kubenswrapper[5104]: I0130 00:16:21.065271 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c42f94f-6ae8-49c5-ba21-54fd74e3329f-catalog-content\") pod \"community-operators-zdtm9\" (UID: \"8c42f94f-6ae8-49c5-ba21-54fd74e3329f\") " pod="openshift-marketplace/community-operators-zdtm9" Jan 30 00:16:21 crc kubenswrapper[5104]: I0130 00:16:21.077582 5104 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rpkrd"] Jan 30 00:16:21 crc kubenswrapper[5104]: I0130 00:16:21.082834 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-glf4v\" (UniqueName: \"kubernetes.io/projected/8c42f94f-6ae8-49c5-ba21-54fd74e3329f-kube-api-access-glf4v\") pod \"community-operators-zdtm9\" (UID: 
\"8c42f94f-6ae8-49c5-ba21-54fd74e3329f\") " pod="openshift-marketplace/community-operators-zdtm9" Jan 30 00:16:21 crc kubenswrapper[5104]: I0130 00:16:21.097758 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rpkrd" Jan 30 00:16:21 crc kubenswrapper[5104]: I0130 00:16:21.097971 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rpkrd"] Jan 30 00:16:21 crc kubenswrapper[5104]: I0130 00:16:21.100335 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Jan 30 00:16:21 crc kubenswrapper[5104]: I0130 00:16:21.163688 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6285221-9433-44df-8c25-e804e3faddd1-catalog-content\") pod \"certified-operators-rpkrd\" (UID: \"a6285221-9433-44df-8c25-e804e3faddd1\") " pod="openshift-marketplace/certified-operators-rpkrd" Jan 30 00:16:21 crc kubenswrapper[5104]: I0130 00:16:21.163747 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6285221-9433-44df-8c25-e804e3faddd1-utilities\") pod \"certified-operators-rpkrd\" (UID: \"a6285221-9433-44df-8c25-e804e3faddd1\") " pod="openshift-marketplace/certified-operators-rpkrd" Jan 30 00:16:21 crc kubenswrapper[5104]: I0130 00:16:21.164109 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knqpb\" (UniqueName: \"kubernetes.io/projected/a6285221-9433-44df-8c25-e804e3faddd1-kube-api-access-knqpb\") pod \"certified-operators-rpkrd\" (UID: \"a6285221-9433-44df-8c25-e804e3faddd1\") " pod="openshift-marketplace/certified-operators-rpkrd" Jan 30 00:16:21 crc kubenswrapper[5104]: I0130 00:16:21.200637 5104 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zdtm9" Jan 30 00:16:21 crc kubenswrapper[5104]: I0130 00:16:21.265524 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-knqpb\" (UniqueName: \"kubernetes.io/projected/a6285221-9433-44df-8c25-e804e3faddd1-kube-api-access-knqpb\") pod \"certified-operators-rpkrd\" (UID: \"a6285221-9433-44df-8c25-e804e3faddd1\") " pod="openshift-marketplace/certified-operators-rpkrd" Jan 30 00:16:21 crc kubenswrapper[5104]: I0130 00:16:21.267272 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6285221-9433-44df-8c25-e804e3faddd1-catalog-content\") pod \"certified-operators-rpkrd\" (UID: \"a6285221-9433-44df-8c25-e804e3faddd1\") " pod="openshift-marketplace/certified-operators-rpkrd" Jan 30 00:16:21 crc kubenswrapper[5104]: I0130 00:16:21.268131 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6285221-9433-44df-8c25-e804e3faddd1-utilities\") pod \"certified-operators-rpkrd\" (UID: \"a6285221-9433-44df-8c25-e804e3faddd1\") " pod="openshift-marketplace/certified-operators-rpkrd" Jan 30 00:16:21 crc kubenswrapper[5104]: I0130 00:16:21.269077 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6285221-9433-44df-8c25-e804e3faddd1-catalog-content\") pod \"certified-operators-rpkrd\" (UID: \"a6285221-9433-44df-8c25-e804e3faddd1\") " pod="openshift-marketplace/certified-operators-rpkrd" Jan 30 00:16:21 crc kubenswrapper[5104]: I0130 00:16:21.269635 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6285221-9433-44df-8c25-e804e3faddd1-utilities\") pod \"certified-operators-rpkrd\" (UID: 
\"a6285221-9433-44df-8c25-e804e3faddd1\") " pod="openshift-marketplace/certified-operators-rpkrd" Jan 30 00:16:21 crc kubenswrapper[5104]: I0130 00:16:21.286831 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-knqpb\" (UniqueName: \"kubernetes.io/projected/a6285221-9433-44df-8c25-e804e3faddd1-kube-api-access-knqpb\") pod \"certified-operators-rpkrd\" (UID: \"a6285221-9433-44df-8c25-e804e3faddd1\") " pod="openshift-marketplace/certified-operators-rpkrd" Jan 30 00:16:21 crc kubenswrapper[5104]: I0130 00:16:21.431630 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rpkrd" Jan 30 00:16:21 crc kubenswrapper[5104]: I0130 00:16:21.594252 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zdtm9"] Jan 30 00:16:21 crc kubenswrapper[5104]: W0130 00:16:21.599089 5104 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8c42f94f_6ae8_49c5_ba21_54fd74e3329f.slice/crio-c2502a745e3122469eb4d2641500a4c3034c9a0821b16b722959ede6909c5f8d WatchSource:0}: Error finding container c2502a745e3122469eb4d2641500a4c3034c9a0821b16b722959ede6909c5f8d: Status 404 returned error can't find the container with id c2502a745e3122469eb4d2641500a4c3034c9a0821b16b722959ede6909c5f8d Jan 30 00:16:21 crc kubenswrapper[5104]: I0130 00:16:21.833899 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rpkrd"] Jan 30 00:16:21 crc kubenswrapper[5104]: W0130 00:16:21.837833 5104 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda6285221_9433_44df_8c25_e804e3faddd1.slice/crio-79e60853cd7158a7dbf8c18d68ba24e1119942d9011b2cdafb97d00e1e3fc5e4 WatchSource:0}: Error finding container 79e60853cd7158a7dbf8c18d68ba24e1119942d9011b2cdafb97d00e1e3fc5e4: Status 
404 returned error can't find the container with id 79e60853cd7158a7dbf8c18d68ba24e1119942d9011b2cdafb97d00e1e3fc5e4 Jan 30 00:16:21 crc kubenswrapper[5104]: I0130 00:16:21.866059 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rpkrd" event={"ID":"a6285221-9433-44df-8c25-e804e3faddd1","Type":"ContainerStarted","Data":"79e60853cd7158a7dbf8c18d68ba24e1119942d9011b2cdafb97d00e1e3fc5e4"} Jan 30 00:16:21 crc kubenswrapper[5104]: I0130 00:16:21.870987 5104 generic.go:358] "Generic (PLEG): container finished" podID="8c42f94f-6ae8-49c5-ba21-54fd74e3329f" containerID="499bef93efa3f8d0a6c350e46ab2b38ff35977ceefd7a838f280904d217357de" exitCode=0 Jan 30 00:16:21 crc kubenswrapper[5104]: I0130 00:16:21.871056 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zdtm9" event={"ID":"8c42f94f-6ae8-49c5-ba21-54fd74e3329f","Type":"ContainerDied","Data":"499bef93efa3f8d0a6c350e46ab2b38ff35977ceefd7a838f280904d217357de"} Jan 30 00:16:21 crc kubenswrapper[5104]: I0130 00:16:21.871109 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zdtm9" event={"ID":"8c42f94f-6ae8-49c5-ba21-54fd74e3329f","Type":"ContainerStarted","Data":"c2502a745e3122469eb4d2641500a4c3034c9a0821b16b722959ede6909c5f8d"} Jan 30 00:16:22 crc kubenswrapper[5104]: I0130 00:16:22.356918 5104 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-vsn4r"] Jan 30 00:16:22 crc kubenswrapper[5104]: I0130 00:16:22.361187 5104 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-vsn4r" Jan 30 00:16:22 crc kubenswrapper[5104]: I0130 00:16:22.383780 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-vsn4r"] Jan 30 00:16:22 crc kubenswrapper[5104]: I0130 00:16:22.386374 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8r7hn\" (UniqueName: \"kubernetes.io/projected/c5d751c6-7d57-42bc-ba11-43b0a7ad7634-kube-api-access-8r7hn\") pod \"image-registry-5d9d95bf5b-vsn4r\" (UID: \"c5d751c6-7d57-42bc-ba11-43b0a7ad7634\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-vsn4r" Jan 30 00:16:22 crc kubenswrapper[5104]: I0130 00:16:22.386425 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c5d751c6-7d57-42bc-ba11-43b0a7ad7634-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-vsn4r\" (UID: \"c5d751c6-7d57-42bc-ba11-43b0a7ad7634\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-vsn4r" Jan 30 00:16:22 crc kubenswrapper[5104]: I0130 00:16:22.386455 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c5d751c6-7d57-42bc-ba11-43b0a7ad7634-bound-sa-token\") pod \"image-registry-5d9d95bf5b-vsn4r\" (UID: \"c5d751c6-7d57-42bc-ba11-43b0a7ad7634\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-vsn4r" Jan 30 00:16:22 crc kubenswrapper[5104]: I0130 00:16:22.386539 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c5d751c6-7d57-42bc-ba11-43b0a7ad7634-registry-certificates\") pod \"image-registry-5d9d95bf5b-vsn4r\" (UID: \"c5d751c6-7d57-42bc-ba11-43b0a7ad7634\") " 
pod="openshift-image-registry/image-registry-5d9d95bf5b-vsn4r"
Jan 30 00:16:22 crc kubenswrapper[5104]: I0130 00:16:22.386597 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c5d751c6-7d57-42bc-ba11-43b0a7ad7634-trusted-ca\") pod \"image-registry-5d9d95bf5b-vsn4r\" (UID: \"c5d751c6-7d57-42bc-ba11-43b0a7ad7634\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-vsn4r"
Jan 30 00:16:22 crc kubenswrapper[5104]: I0130 00:16:22.386782 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c5d751c6-7d57-42bc-ba11-43b0a7ad7634-registry-tls\") pod \"image-registry-5d9d95bf5b-vsn4r\" (UID: \"c5d751c6-7d57-42bc-ba11-43b0a7ad7634\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-vsn4r"
Jan 30 00:16:22 crc kubenswrapper[5104]: I0130 00:16:22.386875 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c5d751c6-7d57-42bc-ba11-43b0a7ad7634-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-vsn4r\" (UID: \"c5d751c6-7d57-42bc-ba11-43b0a7ad7634\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-vsn4r"
Jan 30 00:16:22 crc kubenswrapper[5104]: I0130 00:16:22.386966 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-vsn4r\" (UID: \"c5d751c6-7d57-42bc-ba11-43b0a7ad7634\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-vsn4r"
Jan 30 00:16:22 crc kubenswrapper[5104]: I0130 00:16:22.413280 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-vsn4r\" (UID: \"c5d751c6-7d57-42bc-ba11-43b0a7ad7634\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-vsn4r"
Jan 30 00:16:22 crc kubenswrapper[5104]: I0130 00:16:22.488198 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c5d751c6-7d57-42bc-ba11-43b0a7ad7634-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-vsn4r\" (UID: \"c5d751c6-7d57-42bc-ba11-43b0a7ad7634\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-vsn4r"
Jan 30 00:16:22 crc kubenswrapper[5104]: I0130 00:16:22.488259 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c5d751c6-7d57-42bc-ba11-43b0a7ad7634-bound-sa-token\") pod \"image-registry-5d9d95bf5b-vsn4r\" (UID: \"c5d751c6-7d57-42bc-ba11-43b0a7ad7634\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-vsn4r"
Jan 30 00:16:22 crc kubenswrapper[5104]: I0130 00:16:22.488303 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c5d751c6-7d57-42bc-ba11-43b0a7ad7634-registry-certificates\") pod \"image-registry-5d9d95bf5b-vsn4r\" (UID: \"c5d751c6-7d57-42bc-ba11-43b0a7ad7634\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-vsn4r"
Jan 30 00:16:22 crc kubenswrapper[5104]: I0130 00:16:22.488330 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c5d751c6-7d57-42bc-ba11-43b0a7ad7634-trusted-ca\") pod \"image-registry-5d9d95bf5b-vsn4r\" (UID: \"c5d751c6-7d57-42bc-ba11-43b0a7ad7634\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-vsn4r"
Jan 30 00:16:22 crc kubenswrapper[5104]: I0130 00:16:22.488371 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c5d751c6-7d57-42bc-ba11-43b0a7ad7634-registry-tls\") pod \"image-registry-5d9d95bf5b-vsn4r\" (UID: \"c5d751c6-7d57-42bc-ba11-43b0a7ad7634\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-vsn4r"
Jan 30 00:16:22 crc kubenswrapper[5104]: I0130 00:16:22.488414 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c5d751c6-7d57-42bc-ba11-43b0a7ad7634-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-vsn4r\" (UID: \"c5d751c6-7d57-42bc-ba11-43b0a7ad7634\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-vsn4r"
Jan 30 00:16:22 crc kubenswrapper[5104]: I0130 00:16:22.488470 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8r7hn\" (UniqueName: \"kubernetes.io/projected/c5d751c6-7d57-42bc-ba11-43b0a7ad7634-kube-api-access-8r7hn\") pod \"image-registry-5d9d95bf5b-vsn4r\" (UID: \"c5d751c6-7d57-42bc-ba11-43b0a7ad7634\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-vsn4r"
Jan 30 00:16:22 crc kubenswrapper[5104]: I0130 00:16:22.489661 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c5d751c6-7d57-42bc-ba11-43b0a7ad7634-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-vsn4r\" (UID: \"c5d751c6-7d57-42bc-ba11-43b0a7ad7634\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-vsn4r"
Jan 30 00:16:22 crc kubenswrapper[5104]: I0130 00:16:22.490395 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c5d751c6-7d57-42bc-ba11-43b0a7ad7634-registry-certificates\") pod \"image-registry-5d9d95bf5b-vsn4r\" (UID: \"c5d751c6-7d57-42bc-ba11-43b0a7ad7634\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-vsn4r"
Jan 30 00:16:22 crc kubenswrapper[5104]: I0130 00:16:22.491480 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c5d751c6-7d57-42bc-ba11-43b0a7ad7634-trusted-ca\") pod \"image-registry-5d9d95bf5b-vsn4r\" (UID: \"c5d751c6-7d57-42bc-ba11-43b0a7ad7634\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-vsn4r"
Jan 30 00:16:22 crc kubenswrapper[5104]: I0130 00:16:22.495258 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c5d751c6-7d57-42bc-ba11-43b0a7ad7634-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-vsn4r\" (UID: \"c5d751c6-7d57-42bc-ba11-43b0a7ad7634\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-vsn4r"
Jan 30 00:16:22 crc kubenswrapper[5104]: I0130 00:16:22.495929 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c5d751c6-7d57-42bc-ba11-43b0a7ad7634-registry-tls\") pod \"image-registry-5d9d95bf5b-vsn4r\" (UID: \"c5d751c6-7d57-42bc-ba11-43b0a7ad7634\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-vsn4r"
Jan 30 00:16:22 crc kubenswrapper[5104]: I0130 00:16:22.506842 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c5d751c6-7d57-42bc-ba11-43b0a7ad7634-bound-sa-token\") pod \"image-registry-5d9d95bf5b-vsn4r\" (UID: \"c5d751c6-7d57-42bc-ba11-43b0a7ad7634\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-vsn4r"
Jan 30 00:16:22 crc kubenswrapper[5104]: I0130 00:16:22.506925 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8r7hn\" (UniqueName: \"kubernetes.io/projected/c5d751c6-7d57-42bc-ba11-43b0a7ad7634-kube-api-access-8r7hn\") pod \"image-registry-5d9d95bf5b-vsn4r\" (UID: \"c5d751c6-7d57-42bc-ba11-43b0a7ad7634\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-vsn4r"
Jan 30 00:16:22 crc kubenswrapper[5104]: I0130 00:16:22.676376 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-vsn4r"
Jan 30 00:16:22 crc kubenswrapper[5104]: I0130 00:16:22.878397 5104 generic.go:358] "Generic (PLEG): container finished" podID="a6285221-9433-44df-8c25-e804e3faddd1" containerID="995f9bf9532e0bd875cd97a018132aa3d0cf3703a1778ab5e6dae6beef2738d4" exitCode=0
Jan 30 00:16:22 crc kubenswrapper[5104]: I0130 00:16:22.878519 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rpkrd" event={"ID":"a6285221-9433-44df-8c25-e804e3faddd1","Type":"ContainerDied","Data":"995f9bf9532e0bd875cd97a018132aa3d0cf3703a1778ab5e6dae6beef2738d4"}
Jan 30 00:16:22 crc kubenswrapper[5104]: I0130 00:16:22.882565 5104 generic.go:358] "Generic (PLEG): container finished" podID="8c42f94f-6ae8-49c5-ba21-54fd74e3329f" containerID="e80ce2b4a6d72d7b29c1607bf61d5b21a9d0461565914239ac754021c55474b3" exitCode=0
Jan 30 00:16:22 crc kubenswrapper[5104]: I0130 00:16:22.882710 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zdtm9" event={"ID":"8c42f94f-6ae8-49c5-ba21-54fd74e3329f","Type":"ContainerDied","Data":"e80ce2b4a6d72d7b29c1607bf61d5b21a9d0461565914239ac754021c55474b3"}
Jan 30 00:16:23 crc kubenswrapper[5104]: I0130 00:16:23.096053 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-vsn4r"]
Jan 30 00:16:23 crc kubenswrapper[5104]: I0130 00:16:23.274317 5104 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gjkcz"]
Jan 30 00:16:23 crc kubenswrapper[5104]: I0130 00:16:23.278708 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gjkcz"
Jan 30 00:16:23 crc kubenswrapper[5104]: I0130 00:16:23.283096 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\""
Jan 30 00:16:23 crc kubenswrapper[5104]: I0130 00:16:23.302766 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae1cbfae-486a-406a-a607-6e85a313e208-catalog-content\") pod \"redhat-marketplace-gjkcz\" (UID: \"ae1cbfae-486a-406a-a607-6e85a313e208\") " pod="openshift-marketplace/redhat-marketplace-gjkcz"
Jan 30 00:16:23 crc kubenswrapper[5104]: I0130 00:16:23.302899 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae1cbfae-486a-406a-a607-6e85a313e208-utilities\") pod \"redhat-marketplace-gjkcz\" (UID: \"ae1cbfae-486a-406a-a607-6e85a313e208\") " pod="openshift-marketplace/redhat-marketplace-gjkcz"
Jan 30 00:16:23 crc kubenswrapper[5104]: I0130 00:16:23.307104 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pz4g7\" (UniqueName: \"kubernetes.io/projected/ae1cbfae-486a-406a-a607-6e85a313e208-kube-api-access-pz4g7\") pod \"redhat-marketplace-gjkcz\" (UID: \"ae1cbfae-486a-406a-a607-6e85a313e208\") " pod="openshift-marketplace/redhat-marketplace-gjkcz"
Jan 30 00:16:23 crc kubenswrapper[5104]: I0130 00:16:23.327025 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gjkcz"]
Jan 30 00:16:23 crc kubenswrapper[5104]: I0130 00:16:23.407930 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae1cbfae-486a-406a-a607-6e85a313e208-catalog-content\") pod \"redhat-marketplace-gjkcz\" (UID: \"ae1cbfae-486a-406a-a607-6e85a313e208\") " pod="openshift-marketplace/redhat-marketplace-gjkcz"
Jan 30 00:16:23 crc kubenswrapper[5104]: I0130 00:16:23.407978 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae1cbfae-486a-406a-a607-6e85a313e208-utilities\") pod \"redhat-marketplace-gjkcz\" (UID: \"ae1cbfae-486a-406a-a607-6e85a313e208\") " pod="openshift-marketplace/redhat-marketplace-gjkcz"
Jan 30 00:16:23 crc kubenswrapper[5104]: I0130 00:16:23.408086 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pz4g7\" (UniqueName: \"kubernetes.io/projected/ae1cbfae-486a-406a-a607-6e85a313e208-kube-api-access-pz4g7\") pod \"redhat-marketplace-gjkcz\" (UID: \"ae1cbfae-486a-406a-a607-6e85a313e208\") " pod="openshift-marketplace/redhat-marketplace-gjkcz"
Jan 30 00:16:23 crc kubenswrapper[5104]: I0130 00:16:23.408541 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae1cbfae-486a-406a-a607-6e85a313e208-utilities\") pod \"redhat-marketplace-gjkcz\" (UID: \"ae1cbfae-486a-406a-a607-6e85a313e208\") " pod="openshift-marketplace/redhat-marketplace-gjkcz"
Jan 30 00:16:23 crc kubenswrapper[5104]: I0130 00:16:23.408542 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae1cbfae-486a-406a-a607-6e85a313e208-catalog-content\") pod \"redhat-marketplace-gjkcz\" (UID: \"ae1cbfae-486a-406a-a607-6e85a313e208\") " pod="openshift-marketplace/redhat-marketplace-gjkcz"
Jan 30 00:16:23 crc kubenswrapper[5104]: I0130 00:16:23.436373 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pz4g7\" (UniqueName: \"kubernetes.io/projected/ae1cbfae-486a-406a-a607-6e85a313e208-kube-api-access-pz4g7\") pod \"redhat-marketplace-gjkcz\" (UID: \"ae1cbfae-486a-406a-a607-6e85a313e208\") " pod="openshift-marketplace/redhat-marketplace-gjkcz"
Jan 30 00:16:23 crc kubenswrapper[5104]: I0130 00:16:23.477301 5104 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7h878"]
Jan 30 00:16:23 crc kubenswrapper[5104]: I0130 00:16:23.486284 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7h878"
Jan 30 00:16:23 crc kubenswrapper[5104]: I0130 00:16:23.488410 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\""
Jan 30 00:16:23 crc kubenswrapper[5104]: I0130 00:16:23.490293 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7h878"]
Jan 30 00:16:23 crc kubenswrapper[5104]: I0130 00:16:23.509461 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/734416dc-0d5e-4b50-a117-bc6e9c8f92b9-catalog-content\") pod \"redhat-operators-7h878\" (UID: \"734416dc-0d5e-4b50-a117-bc6e9c8f92b9\") " pod="openshift-marketplace/redhat-operators-7h878"
Jan 30 00:16:23 crc kubenswrapper[5104]: I0130 00:16:23.509564 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/734416dc-0d5e-4b50-a117-bc6e9c8f92b9-utilities\") pod \"redhat-operators-7h878\" (UID: \"734416dc-0d5e-4b50-a117-bc6e9c8f92b9\") " pod="openshift-marketplace/redhat-operators-7h878"
Jan 30 00:16:23 crc kubenswrapper[5104]: I0130 00:16:23.509589 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbfd2\" (UniqueName: \"kubernetes.io/projected/734416dc-0d5e-4b50-a117-bc6e9c8f92b9-kube-api-access-vbfd2\") pod \"redhat-operators-7h878\" (UID: \"734416dc-0d5e-4b50-a117-bc6e9c8f92b9\") " pod="openshift-marketplace/redhat-operators-7h878"
Jan 30 00:16:23 crc kubenswrapper[5104]: I0130 00:16:23.610666 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/734416dc-0d5e-4b50-a117-bc6e9c8f92b9-utilities\") pod \"redhat-operators-7h878\" (UID: \"734416dc-0d5e-4b50-a117-bc6e9c8f92b9\") " pod="openshift-marketplace/redhat-operators-7h878"
Jan 30 00:16:23 crc kubenswrapper[5104]: I0130 00:16:23.610714 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vbfd2\" (UniqueName: \"kubernetes.io/projected/734416dc-0d5e-4b50-a117-bc6e9c8f92b9-kube-api-access-vbfd2\") pod \"redhat-operators-7h878\" (UID: \"734416dc-0d5e-4b50-a117-bc6e9c8f92b9\") " pod="openshift-marketplace/redhat-operators-7h878"
Jan 30 00:16:23 crc kubenswrapper[5104]: I0130 00:16:23.610979 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/734416dc-0d5e-4b50-a117-bc6e9c8f92b9-catalog-content\") pod \"redhat-operators-7h878\" (UID: \"734416dc-0d5e-4b50-a117-bc6e9c8f92b9\") " pod="openshift-marketplace/redhat-operators-7h878"
Jan 30 00:16:23 crc kubenswrapper[5104]: I0130 00:16:23.611686 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/734416dc-0d5e-4b50-a117-bc6e9c8f92b9-utilities\") pod \"redhat-operators-7h878\" (UID: \"734416dc-0d5e-4b50-a117-bc6e9c8f92b9\") " pod="openshift-marketplace/redhat-operators-7h878"
Jan 30 00:16:23 crc kubenswrapper[5104]: I0130 00:16:23.611825 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/734416dc-0d5e-4b50-a117-bc6e9c8f92b9-catalog-content\") pod \"redhat-operators-7h878\" (UID: \"734416dc-0d5e-4b50-a117-bc6e9c8f92b9\") " pod="openshift-marketplace/redhat-operators-7h878"
Jan 30 00:16:23 crc kubenswrapper[5104]: I0130 00:16:23.630841 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbfd2\" (UniqueName: \"kubernetes.io/projected/734416dc-0d5e-4b50-a117-bc6e9c8f92b9-kube-api-access-vbfd2\") pod \"redhat-operators-7h878\" (UID: \"734416dc-0d5e-4b50-a117-bc6e9c8f92b9\") " pod="openshift-marketplace/redhat-operators-7h878"
Jan 30 00:16:23 crc kubenswrapper[5104]: I0130 00:16:23.634403 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gjkcz"
Jan 30 00:16:23 crc kubenswrapper[5104]: I0130 00:16:23.854165 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7h878"
Jan 30 00:16:23 crc kubenswrapper[5104]: I0130 00:16:23.891275 5104 generic.go:358] "Generic (PLEG): container finished" podID="a6285221-9433-44df-8c25-e804e3faddd1" containerID="2bb02cf747bcee5e8b9c570c758b3c86c0c61db1d7da8b9102d2eb8267db9535" exitCode=0
Jan 30 00:16:23 crc kubenswrapper[5104]: I0130 00:16:23.891364 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rpkrd" event={"ID":"a6285221-9433-44df-8c25-e804e3faddd1","Type":"ContainerDied","Data":"2bb02cf747bcee5e8b9c570c758b3c86c0c61db1d7da8b9102d2eb8267db9535"}
Jan 30 00:16:23 crc kubenswrapper[5104]: I0130 00:16:23.897079 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-vsn4r" event={"ID":"c5d751c6-7d57-42bc-ba11-43b0a7ad7634","Type":"ContainerStarted","Data":"1b3fe74daaaed4f3ddcdb29622b7dab6d9abbc7e24ab44fedf68195f2aae0df5"}
Jan 30 00:16:23 crc kubenswrapper[5104]: I0130 00:16:23.897134 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-vsn4r" event={"ID":"c5d751c6-7d57-42bc-ba11-43b0a7ad7634","Type":"ContainerStarted","Data":"67eb1227249f7fe64f4ec93b11d2317e6c0a330dcb5b408769ec2028841ec992"}
Jan 30 00:16:23 crc kubenswrapper[5104]: I0130 00:16:23.897280 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-vsn4r"
Jan 30 00:16:23 crc kubenswrapper[5104]: I0130 00:16:23.901285 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zdtm9" event={"ID":"8c42f94f-6ae8-49c5-ba21-54fd74e3329f","Type":"ContainerStarted","Data":"8e63b8cb795884d8732d8e9286613df6743dcbe32d349e4d8d1881773d951604"}
Jan 30 00:16:23 crc kubenswrapper[5104]: I0130 00:16:23.933665 5104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-5d9d95bf5b-vsn4r" podStartSLOduration=1.9336462239999999 podStartE2EDuration="1.933646224s" podCreationTimestamp="2026-01-30 00:16:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:16:23.930104328 +0000 UTC m=+364.662443547" watchObservedRunningTime="2026-01-30 00:16:23.933646224 +0000 UTC m=+364.665985463"
Jan 30 00:16:23 crc kubenswrapper[5104]: I0130 00:16:23.954016 5104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zdtm9" podStartSLOduration=3.382789376 podStartE2EDuration="3.953997364s" podCreationTimestamp="2026-01-30 00:16:20 +0000 UTC" firstStartedPulling="2026-01-30 00:16:21.871969414 +0000 UTC m=+362.604308633" lastFinishedPulling="2026-01-30 00:16:22.443177402 +0000 UTC m=+363.175516621" observedRunningTime="2026-01-30 00:16:23.950167501 +0000 UTC m=+364.682506720" watchObservedRunningTime="2026-01-30 00:16:23.953997364 +0000 UTC m=+364.686336663"
Jan 30 00:16:24 crc kubenswrapper[5104]: I0130 00:16:24.034830 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gjkcz"]
Jan 30 00:16:24 crc kubenswrapper[5104]: W0130 00:16:24.053254 5104 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae1cbfae_486a_406a_a607_6e85a313e208.slice/crio-503536269a595c3bbd6c8a11499d1c6324efd95f5638c84f340fcd363ff6a2cf WatchSource:0}: Error finding container 503536269a595c3bbd6c8a11499d1c6324efd95f5638c84f340fcd363ff6a2cf: Status 404 returned error can't find the container with id 503536269a595c3bbd6c8a11499d1c6324efd95f5638c84f340fcd363ff6a2cf
Jan 30 00:16:24 crc kubenswrapper[5104]: I0130 00:16:24.242472 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7h878"]
Jan 30 00:16:24 crc kubenswrapper[5104]: E0130 00:16:24.293063 5104 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae1cbfae_486a_406a_a607_6e85a313e208.slice/crio-conmon-93a0cf12b7a4597bd46ab35ae2950102a4d5b3554fd6785dd30b7f50e267a829.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae1cbfae_486a_406a_a607_6e85a313e208.slice/crio-93a0cf12b7a4597bd46ab35ae2950102a4d5b3554fd6785dd30b7f50e267a829.scope\": RecentStats: unable to find data in memory cache]"
Jan 30 00:16:24 crc kubenswrapper[5104]: I0130 00:16:24.909635 5104 generic.go:358] "Generic (PLEG): container finished" podID="734416dc-0d5e-4b50-a117-bc6e9c8f92b9" containerID="128419cfb279e5efb3b6737f2411bd2a294e6d2eb7d1a3b723c5c8aedd1b0902" exitCode=0
Jan 30 00:16:24 crc kubenswrapper[5104]: I0130 00:16:24.910937 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7h878" event={"ID":"734416dc-0d5e-4b50-a117-bc6e9c8f92b9","Type":"ContainerDied","Data":"128419cfb279e5efb3b6737f2411bd2a294e6d2eb7d1a3b723c5c8aedd1b0902"}
Jan 30 00:16:24 crc kubenswrapper[5104]: I0130 00:16:24.911126 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7h878" event={"ID":"734416dc-0d5e-4b50-a117-bc6e9c8f92b9","Type":"ContainerStarted","Data":"81d966571da4aa5fef18d6969741c3c9f0df5df401d8c393371f94de2d41ebf7"}
Jan 30 00:16:24 crc kubenswrapper[5104]: I0130 00:16:24.912059 5104 generic.go:358] "Generic (PLEG): container finished" podID="ae1cbfae-486a-406a-a607-6e85a313e208" containerID="93a0cf12b7a4597bd46ab35ae2950102a4d5b3554fd6785dd30b7f50e267a829" exitCode=0
Jan 30 00:16:24 crc kubenswrapper[5104]: I0130 00:16:24.912358 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gjkcz" event={"ID":"ae1cbfae-486a-406a-a607-6e85a313e208","Type":"ContainerDied","Data":"93a0cf12b7a4597bd46ab35ae2950102a4d5b3554fd6785dd30b7f50e267a829"}
Jan 30 00:16:24 crc kubenswrapper[5104]: I0130 00:16:24.912404 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gjkcz" event={"ID":"ae1cbfae-486a-406a-a607-6e85a313e208","Type":"ContainerStarted","Data":"503536269a595c3bbd6c8a11499d1c6324efd95f5638c84f340fcd363ff6a2cf"}
Jan 30 00:16:24 crc kubenswrapper[5104]: I0130 00:16:24.919059 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rpkrd" event={"ID":"a6285221-9433-44df-8c25-e804e3faddd1","Type":"ContainerStarted","Data":"fbc1dd7ac9ec64d5c9890dfb9cbb03c9906294706afa76783bc5e9a8eaa853b5"}
Jan 30 00:16:24 crc kubenswrapper[5104]: I0130 00:16:24.975769 5104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rpkrd" podStartSLOduration=3.435954556 podStartE2EDuration="3.975744353s" podCreationTimestamp="2026-01-30 00:16:21 +0000 UTC" firstStartedPulling="2026-01-30 00:16:22.879346883 +0000 UTC m=+363.611686112" lastFinishedPulling="2026-01-30 00:16:23.41913669 +0000 UTC m=+364.151475909" observedRunningTime="2026-01-30 00:16:24.972497526 +0000 UTC m=+365.704836745" watchObservedRunningTime="2026-01-30 00:16:24.975744353 +0000 UTC m=+365.708083612"
Jan 30 00:16:25 crc kubenswrapper[5104]: I0130 00:16:25.922150 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7h878" event={"ID":"734416dc-0d5e-4b50-a117-bc6e9c8f92b9","Type":"ContainerStarted","Data":"175cb347980805bcaec58fa739a4e7d7d9e18e9800fc1628a11cfcdb4f4ee39d"}
Jan 30 00:16:25 crc kubenswrapper[5104]: I0130 00:16:25.925408 5104 generic.go:358] "Generic (PLEG): container finished" podID="ae1cbfae-486a-406a-a607-6e85a313e208" containerID="cf41a0224629c0f32db59dca4739cb0c3ec90664572ef640cacae2ee249a5e35" exitCode=0
Jan 30 00:16:25 crc kubenswrapper[5104]: I0130 00:16:25.927016 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gjkcz" event={"ID":"ae1cbfae-486a-406a-a607-6e85a313e208","Type":"ContainerDied","Data":"cf41a0224629c0f32db59dca4739cb0c3ec90664572ef640cacae2ee249a5e35"}
Jan 30 00:16:26 crc kubenswrapper[5104]: I0130 00:16:26.943131 5104 generic.go:358] "Generic (PLEG): container finished" podID="734416dc-0d5e-4b50-a117-bc6e9c8f92b9" containerID="175cb347980805bcaec58fa739a4e7d7d9e18e9800fc1628a11cfcdb4f4ee39d" exitCode=0
Jan 30 00:16:26 crc kubenswrapper[5104]: I0130 00:16:26.943357 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7h878" event={"ID":"734416dc-0d5e-4b50-a117-bc6e9c8f92b9","Type":"ContainerDied","Data":"175cb347980805bcaec58fa739a4e7d7d9e18e9800fc1628a11cfcdb4f4ee39d"}
Jan 30 00:16:26 crc kubenswrapper[5104]: I0130 00:16:26.959384 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gjkcz" event={"ID":"ae1cbfae-486a-406a-a607-6e85a313e208","Type":"ContainerStarted","Data":"b8a16fbcd16d55491fbd8f16b848b5584f51e8fc8a55969c374d5ab7cf7ad319"}
Jan 30 00:16:26 crc kubenswrapper[5104]: I0130 00:16:26.990623 5104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gjkcz" podStartSLOduration=3.35599245 podStartE2EDuration="3.990600205s" podCreationTimestamp="2026-01-30 00:16:23 +0000 UTC" firstStartedPulling="2026-01-30 00:16:24.91323026 +0000 UTC m=+365.645569489" lastFinishedPulling="2026-01-30 00:16:25.547838025 +0000 UTC m=+366.280177244" observedRunningTime="2026-01-30 00:16:26.98563229 +0000 UTC m=+367.717971519" watchObservedRunningTime="2026-01-30 00:16:26.990600205 +0000 UTC m=+367.722939444"
Jan 30 00:16:27 crc kubenswrapper[5104]: I0130 00:16:27.968342 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7h878" event={"ID":"734416dc-0d5e-4b50-a117-bc6e9c8f92b9","Type":"ContainerStarted","Data":"acd6da036230816ab5db3987b8da69dab3a112aeb3065bfa20a9643bc9f6e902"}
Jan 30 00:16:28 crc kubenswrapper[5104]: I0130 00:16:28.001059 5104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7h878" podStartSLOduration=4.419065518 podStartE2EDuration="5.001036788s" podCreationTimestamp="2026-01-30 00:16:23 +0000 UTC" firstStartedPulling="2026-01-30 00:16:24.911242667 +0000 UTC m=+365.643581926" lastFinishedPulling="2026-01-30 00:16:25.493213977 +0000 UTC m=+366.225553196" observedRunningTime="2026-01-30 00:16:27.997495031 +0000 UTC m=+368.729834270" watchObservedRunningTime="2026-01-30 00:16:28.001036788 +0000 UTC m=+368.733376047"
Jan 30 00:16:31 crc kubenswrapper[5104]: I0130 00:16:31.201780 5104 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zdtm9"
Jan 30 00:16:31 crc kubenswrapper[5104]: I0130 00:16:31.204496 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-zdtm9"
Jan 30 00:16:31 crc kubenswrapper[5104]: I0130 00:16:31.247088 5104 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zdtm9"
Jan 30 00:16:31 crc kubenswrapper[5104]: I0130 00:16:31.432105 5104 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rpkrd"
Jan 30 00:16:31 crc kubenswrapper[5104]: I0130 00:16:31.432370 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-rpkrd"
Jan 30 00:16:31 crc kubenswrapper[5104]: I0130 00:16:31.494118 5104 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rpkrd"
Jan 30 00:16:32 crc kubenswrapper[5104]: I0130 00:16:32.027408 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zdtm9"
Jan 30 00:16:32 crc kubenswrapper[5104]: I0130 00:16:32.033731 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rpkrd"
Jan 30 00:16:33 crc kubenswrapper[5104]: I0130 00:16:33.634686 5104 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gjkcz"
Jan 30 00:16:33 crc kubenswrapper[5104]: I0130 00:16:33.634744 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-gjkcz"
Jan 30 00:16:33 crc kubenswrapper[5104]: I0130 00:16:33.689148 5104 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gjkcz"
Jan 30 00:16:33 crc kubenswrapper[5104]: I0130 00:16:33.855496 5104 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7h878"
Jan 30 00:16:33 crc kubenswrapper[5104]: I0130 00:16:33.855563 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-7h878"
Jan 30 00:16:33 crc kubenswrapper[5104]: I0130 00:16:33.910567 5104 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7h878"
Jan 30 00:16:34 crc kubenswrapper[5104]: I0130 00:16:34.059178 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gjkcz"
Jan 30 00:16:34 crc kubenswrapper[5104]: I0130 00:16:34.070880 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7h878"
Jan 30 00:16:44 crc kubenswrapper[5104]: I0130 00:16:44.924733 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-vsn4r"
Jan 30 00:16:44 crc kubenswrapper[5104]: I0130 00:16:44.986930 5104 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-lhbqs"]
Jan 30 00:17:10 crc kubenswrapper[5104]: I0130 00:17:10.019668 5104 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" podUID="40d2656d-a61b-4aaa-8860-225ca88ac6a7" containerName="registry" containerID="cri-o://23413498b03e80448a9eab6cf163532a15bffaa68e4449835b274c1e5994a24c" gracePeriod=30
Jan 30 00:17:10 crc kubenswrapper[5104]: I0130 00:17:10.268013 5104 generic.go:358] "Generic (PLEG): container finished" podID="40d2656d-a61b-4aaa-8860-225ca88ac6a7" containerID="23413498b03e80448a9eab6cf163532a15bffaa68e4449835b274c1e5994a24c" exitCode=0
Jan 30 00:17:10 crc kubenswrapper[5104]: I0130 00:17:10.268106 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" event={"ID":"40d2656d-a61b-4aaa-8860-225ca88ac6a7","Type":"ContainerDied","Data":"23413498b03e80448a9eab6cf163532a15bffaa68e4449835b274c1e5994a24c"}
Jan 30 00:17:10 crc kubenswrapper[5104]: I0130 00:17:10.511926 5104 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-lhbqs"
Jan 30 00:17:10 crc kubenswrapper[5104]: I0130 00:17:10.594306 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/40d2656d-a61b-4aaa-8860-225ca88ac6a7-installation-pull-secrets\") pod \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") "
Jan 30 00:17:10 crc kubenswrapper[5104]: I0130 00:17:10.594373 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p8xfb\" (UniqueName: \"kubernetes.io/projected/40d2656d-a61b-4aaa-8860-225ca88ac6a7-kube-api-access-p8xfb\") pod \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") "
Jan 30 00:17:10 crc kubenswrapper[5104]: I0130 00:17:10.594622 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") "
Jan 30 00:17:10 crc kubenswrapper[5104]: I0130 00:17:10.594648 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/40d2656d-a61b-4aaa-8860-225ca88ac6a7-bound-sa-token\") pod \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") "
Jan 30 00:17:10 crc kubenswrapper[5104]: I0130 00:17:10.594685 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/40d2656d-a61b-4aaa-8860-225ca88ac6a7-registry-certificates\") pod \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") "
Jan 30 00:17:10 crc kubenswrapper[5104]: I0130 00:17:10.594716 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/40d2656d-a61b-4aaa-8860-225ca88ac6a7-ca-trust-extracted\") pod \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") "
Jan 30 00:17:10 crc kubenswrapper[5104]: I0130 00:17:10.594748 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/40d2656d-a61b-4aaa-8860-225ca88ac6a7-registry-tls\") pod \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") "
Jan 30 00:17:10 crc kubenswrapper[5104]: I0130 00:17:10.594769 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/40d2656d-a61b-4aaa-8860-225ca88ac6a7-trusted-ca\") pod \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\" (UID: \"40d2656d-a61b-4aaa-8860-225ca88ac6a7\") "
Jan 30 00:17:10 crc kubenswrapper[5104]: I0130 00:17:10.595562 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40d2656d-a61b-4aaa-8860-225ca88ac6a7-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "40d2656d-a61b-4aaa-8860-225ca88ac6a7" (UID: "40d2656d-a61b-4aaa-8860-225ca88ac6a7"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 30 00:17:10 crc kubenswrapper[5104]: I0130 00:17:10.595706 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40d2656d-a61b-4aaa-8860-225ca88ac6a7-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "40d2656d-a61b-4aaa-8860-225ca88ac6a7" (UID: "40d2656d-a61b-4aaa-8860-225ca88ac6a7"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 30 00:17:10 crc kubenswrapper[5104]: I0130 00:17:10.607105 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40d2656d-a61b-4aaa-8860-225ca88ac6a7-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "40d2656d-a61b-4aaa-8860-225ca88ac6a7" (UID: "40d2656d-a61b-4aaa-8860-225ca88ac6a7"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 30 00:17:10 crc kubenswrapper[5104]: I0130 00:17:10.607122 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40d2656d-a61b-4aaa-8860-225ca88ac6a7-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "40d2656d-a61b-4aaa-8860-225ca88ac6a7" (UID: "40d2656d-a61b-4aaa-8860-225ca88ac6a7"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 30 00:17:10 crc kubenswrapper[5104]: I0130 00:17:10.607547 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40d2656d-a61b-4aaa-8860-225ca88ac6a7-kube-api-access-p8xfb" (OuterVolumeSpecName: "kube-api-access-p8xfb") pod "40d2656d-a61b-4aaa-8860-225ca88ac6a7" (UID: "40d2656d-a61b-4aaa-8860-225ca88ac6a7"). InnerVolumeSpecName "kube-api-access-p8xfb". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 30 00:17:10 crc kubenswrapper[5104]: I0130 00:17:10.608309 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40d2656d-a61b-4aaa-8860-225ca88ac6a7-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "40d2656d-a61b-4aaa-8860-225ca88ac6a7" (UID: "40d2656d-a61b-4aaa-8860-225ca88ac6a7"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 30 00:17:10 crc kubenswrapper[5104]: I0130 00:17:10.608667 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "registry-storage") pod "40d2656d-a61b-4aaa-8860-225ca88ac6a7" (UID: "40d2656d-a61b-4aaa-8860-225ca88ac6a7"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". PluginName "kubernetes.io/csi", VolumeGIDValue ""
Jan 30 00:17:10 crc kubenswrapper[5104]: I0130 00:17:10.621512 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40d2656d-a61b-4aaa-8860-225ca88ac6a7-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "40d2656d-a61b-4aaa-8860-225ca88ac6a7" (UID: "40d2656d-a61b-4aaa-8860-225ca88ac6a7"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 30 00:17:10 crc kubenswrapper[5104]: I0130 00:17:10.696372 5104 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-p8xfb\" (UniqueName: \"kubernetes.io/projected/40d2656d-a61b-4aaa-8860-225ca88ac6a7-kube-api-access-p8xfb\") on node \"crc\" DevicePath \"\""
Jan 30 00:17:10 crc kubenswrapper[5104]: I0130 00:17:10.696701 5104 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/40d2656d-a61b-4aaa-8860-225ca88ac6a7-bound-sa-token\") on node \"crc\" DevicePath \"\""
Jan 30 00:17:10 crc kubenswrapper[5104]: I0130 00:17:10.696843 5104 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/40d2656d-a61b-4aaa-8860-225ca88ac6a7-registry-certificates\") on node \"crc\" DevicePath \"\""
Jan 30 00:17:10 crc kubenswrapper[5104]: I0130 00:17:10.697038 5104 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName:
\"kubernetes.io/empty-dir/40d2656d-a61b-4aaa-8860-225ca88ac6a7-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 30 00:17:10 crc kubenswrapper[5104]: I0130 00:17:10.697167 5104 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/40d2656d-a61b-4aaa-8860-225ca88ac6a7-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 30 00:17:10 crc kubenswrapper[5104]: I0130 00:17:10.697301 5104 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/40d2656d-a61b-4aaa-8860-225ca88ac6a7-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 00:17:10 crc kubenswrapper[5104]: I0130 00:17:10.697451 5104 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/40d2656d-a61b-4aaa-8860-225ca88ac6a7-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 30 00:17:11 crc kubenswrapper[5104]: I0130 00:17:11.276486 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" event={"ID":"40d2656d-a61b-4aaa-8860-225ca88ac6a7","Type":"ContainerDied","Data":"2efce453c0d3442309138dc91cc60dc06ec515955ac1e677e2131dfa9aae88f1"} Jan 30 00:17:11 crc kubenswrapper[5104]: I0130 00:17:11.276544 5104 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-lhbqs" Jan 30 00:17:11 crc kubenswrapper[5104]: I0130 00:17:11.276906 5104 scope.go:117] "RemoveContainer" containerID="23413498b03e80448a9eab6cf163532a15bffaa68e4449835b274c1e5994a24c" Jan 30 00:17:11 crc kubenswrapper[5104]: I0130 00:17:11.330113 5104 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-lhbqs"] Jan 30 00:17:11 crc kubenswrapper[5104]: I0130 00:17:11.338624 5104 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-lhbqs"] Jan 30 00:17:12 crc kubenswrapper[5104]: I0130 00:17:12.537608 5104 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40d2656d-a61b-4aaa-8860-225ca88ac6a7" path="/var/lib/kubelet/pods/40d2656d-a61b-4aaa-8860-225ca88ac6a7/volumes" Jan 30 00:17:44 crc kubenswrapper[5104]: I0130 00:17:44.950325 5104 patch_prober.go:28] interesting pod/machine-config-daemon-jzfxc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 00:17:44 crc kubenswrapper[5104]: I0130 00:17:44.950978 5104 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jzfxc" podUID="2f49b5db-a679-4eef-9bf2-8d0275caac12" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 00:18:00 crc kubenswrapper[5104]: I0130 00:18:00.145957 5104 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29495538-t48qk"] Jan 30 00:18:00 crc kubenswrapper[5104]: I0130 00:18:00.147226 5104 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="40d2656d-a61b-4aaa-8860-225ca88ac6a7" 
containerName="registry" Jan 30 00:18:00 crc kubenswrapper[5104]: I0130 00:18:00.147245 5104 state_mem.go:107] "Deleted CPUSet assignment" podUID="40d2656d-a61b-4aaa-8860-225ca88ac6a7" containerName="registry" Jan 30 00:18:00 crc kubenswrapper[5104]: I0130 00:18:00.147405 5104 memory_manager.go:356] "RemoveStaleState removing state" podUID="40d2656d-a61b-4aaa-8860-225ca88ac6a7" containerName="registry" Jan 30 00:18:00 crc kubenswrapper[5104]: I0130 00:18:00.154361 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29495538-t48qk" Jan 30 00:18:00 crc kubenswrapper[5104]: I0130 00:18:00.156483 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 30 00:18:00 crc kubenswrapper[5104]: I0130 00:18:00.157412 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-xh9r9\"" Jan 30 00:18:00 crc kubenswrapper[5104]: I0130 00:18:00.158918 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 30 00:18:00 crc kubenswrapper[5104]: I0130 00:18:00.161510 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29495538-t48qk"] Jan 30 00:18:00 crc kubenswrapper[5104]: I0130 00:18:00.230256 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hm468\" (UniqueName: \"kubernetes.io/projected/9bc1e14b-08a9-46fd-a3b0-a1754fa1d35b-kube-api-access-hm468\") pod \"auto-csr-approver-29495538-t48qk\" (UID: \"9bc1e14b-08a9-46fd-a3b0-a1754fa1d35b\") " pod="openshift-infra/auto-csr-approver-29495538-t48qk" Jan 30 00:18:00 crc kubenswrapper[5104]: I0130 00:18:00.331190 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hm468\" (UniqueName: 
\"kubernetes.io/projected/9bc1e14b-08a9-46fd-a3b0-a1754fa1d35b-kube-api-access-hm468\") pod \"auto-csr-approver-29495538-t48qk\" (UID: \"9bc1e14b-08a9-46fd-a3b0-a1754fa1d35b\") " pod="openshift-infra/auto-csr-approver-29495538-t48qk" Jan 30 00:18:00 crc kubenswrapper[5104]: I0130 00:18:00.356240 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hm468\" (UniqueName: \"kubernetes.io/projected/9bc1e14b-08a9-46fd-a3b0-a1754fa1d35b-kube-api-access-hm468\") pod \"auto-csr-approver-29495538-t48qk\" (UID: \"9bc1e14b-08a9-46fd-a3b0-a1754fa1d35b\") " pod="openshift-infra/auto-csr-approver-29495538-t48qk" Jan 30 00:18:00 crc kubenswrapper[5104]: I0130 00:18:00.474599 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29495538-t48qk" Jan 30 00:18:00 crc kubenswrapper[5104]: I0130 00:18:00.894654 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29495538-t48qk"] Jan 30 00:18:01 crc kubenswrapper[5104]: I0130 00:18:01.618336 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495538-t48qk" event={"ID":"9bc1e14b-08a9-46fd-a3b0-a1754fa1d35b","Type":"ContainerStarted","Data":"82438970fda4d7adc607ebb4af2f713b2211b2ea4b9c2884a7d1e6d26b5f7c30"} Jan 30 00:18:04 crc kubenswrapper[5104]: I0130 00:18:04.483342 5104 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kubelet-serving" csr="csr-2wswp" Jan 30 00:18:04 crc kubenswrapper[5104]: I0130 00:18:04.513619 5104 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kubelet-serving" csr="csr-2wswp" Jan 30 00:18:04 crc kubenswrapper[5104]: I0130 00:18:04.634522 5104 generic.go:358] "Generic (PLEG): container finished" podID="9bc1e14b-08a9-46fd-a3b0-a1754fa1d35b" containerID="27bac406680865d1d5c6ed7d5ce468c8a83db1088e19a1cac083290838eb5eba" exitCode=0 Jan 30 00:18:04 crc 
kubenswrapper[5104]: I0130 00:18:04.634595 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495538-t48qk" event={"ID":"9bc1e14b-08a9-46fd-a3b0-a1754fa1d35b","Type":"ContainerDied","Data":"27bac406680865d1d5c6ed7d5ce468c8a83db1088e19a1cac083290838eb5eba"} Jan 30 00:18:05 crc kubenswrapper[5104]: I0130 00:18:05.515148 5104 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2026-03-01 00:13:04 +0000 UTC" deadline="2026-02-25 01:50:53.804303095 +0000 UTC" Jan 30 00:18:05 crc kubenswrapper[5104]: I0130 00:18:05.515188 5104 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="625h32m48.289118541s" Jan 30 00:18:05 crc kubenswrapper[5104]: I0130 00:18:05.870143 5104 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29495538-t48qk" Jan 30 00:18:06 crc kubenswrapper[5104]: I0130 00:18:06.006021 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hm468\" (UniqueName: \"kubernetes.io/projected/9bc1e14b-08a9-46fd-a3b0-a1754fa1d35b-kube-api-access-hm468\") pod \"9bc1e14b-08a9-46fd-a3b0-a1754fa1d35b\" (UID: \"9bc1e14b-08a9-46fd-a3b0-a1754fa1d35b\") " Jan 30 00:18:06 crc kubenswrapper[5104]: I0130 00:18:06.013022 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bc1e14b-08a9-46fd-a3b0-a1754fa1d35b-kube-api-access-hm468" (OuterVolumeSpecName: "kube-api-access-hm468") pod "9bc1e14b-08a9-46fd-a3b0-a1754fa1d35b" (UID: "9bc1e14b-08a9-46fd-a3b0-a1754fa1d35b"). InnerVolumeSpecName "kube-api-access-hm468". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:18:06 crc kubenswrapper[5104]: I0130 00:18:06.107322 5104 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hm468\" (UniqueName: \"kubernetes.io/projected/9bc1e14b-08a9-46fd-a3b0-a1754fa1d35b-kube-api-access-hm468\") on node \"crc\" DevicePath \"\"" Jan 30 00:18:06 crc kubenswrapper[5104]: I0130 00:18:06.515833 5104 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2026-03-01 00:13:04 +0000 UTC" deadline="2026-02-25 04:27:41.034343598 +0000 UTC" Jan 30 00:18:06 crc kubenswrapper[5104]: I0130 00:18:06.516979 5104 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="628h9m34.517395919s" Jan 30 00:18:06 crc kubenswrapper[5104]: I0130 00:18:06.652249 5104 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29495538-t48qk" Jan 30 00:18:06 crc kubenswrapper[5104]: I0130 00:18:06.652328 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495538-t48qk" event={"ID":"9bc1e14b-08a9-46fd-a3b0-a1754fa1d35b","Type":"ContainerDied","Data":"82438970fda4d7adc607ebb4af2f713b2211b2ea4b9c2884a7d1e6d26b5f7c30"} Jan 30 00:18:06 crc kubenswrapper[5104]: I0130 00:18:06.652394 5104 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="82438970fda4d7adc607ebb4af2f713b2211b2ea4b9c2884a7d1e6d26b5f7c30" Jan 30 00:18:14 crc kubenswrapper[5104]: I0130 00:18:14.949485 5104 patch_prober.go:28] interesting pod/machine-config-daemon-jzfxc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 00:18:14 crc kubenswrapper[5104]: I0130 00:18:14.949983 5104 prober.go:120] "Probe 
failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jzfxc" podUID="2f49b5db-a679-4eef-9bf2-8d0275caac12" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 00:18:44 crc kubenswrapper[5104]: I0130 00:18:44.949762 5104 patch_prober.go:28] interesting pod/machine-config-daemon-jzfxc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 00:18:44 crc kubenswrapper[5104]: I0130 00:18:44.950426 5104 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jzfxc" podUID="2f49b5db-a679-4eef-9bf2-8d0275caac12" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 00:18:44 crc kubenswrapper[5104]: I0130 00:18:44.950493 5104 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jzfxc" Jan 30 00:18:44 crc kubenswrapper[5104]: I0130 00:18:44.951451 5104 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"592be4ef21e7b38e7e47f25a331744fdeaee7be766fc0073ca4589c272651c5a"} pod="openshift-machine-config-operator/machine-config-daemon-jzfxc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 00:18:44 crc kubenswrapper[5104]: I0130 00:18:44.951555 5104 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jzfxc" podUID="2f49b5db-a679-4eef-9bf2-8d0275caac12" containerName="machine-config-daemon" 
containerID="cri-o://592be4ef21e7b38e7e47f25a331744fdeaee7be766fc0073ca4589c272651c5a" gracePeriod=600 Jan 30 00:18:45 crc kubenswrapper[5104]: I0130 00:18:45.911136 5104 generic.go:358] "Generic (PLEG): container finished" podID="2f49b5db-a679-4eef-9bf2-8d0275caac12" containerID="592be4ef21e7b38e7e47f25a331744fdeaee7be766fc0073ca4589c272651c5a" exitCode=0 Jan 30 00:18:45 crc kubenswrapper[5104]: I0130 00:18:45.911220 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzfxc" event={"ID":"2f49b5db-a679-4eef-9bf2-8d0275caac12","Type":"ContainerDied","Data":"592be4ef21e7b38e7e47f25a331744fdeaee7be766fc0073ca4589c272651c5a"} Jan 30 00:18:45 crc kubenswrapper[5104]: I0130 00:18:45.911894 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzfxc" event={"ID":"2f49b5db-a679-4eef-9bf2-8d0275caac12","Type":"ContainerStarted","Data":"d754d2bbf2cca802aaf2079a592a35c77544128b415319cab69816ec60b29ff6"} Jan 30 00:18:45 crc kubenswrapper[5104]: I0130 00:18:45.911911 5104 scope.go:117] "RemoveContainer" containerID="f5b028a088c03809c64529cc57108c79c73124fc91728bb2bfc48406b3351ca6" Jan 30 00:20:00 crc kubenswrapper[5104]: I0130 00:20:00.145064 5104 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29495540-ns2dd"] Jan 30 00:20:00 crc kubenswrapper[5104]: I0130 00:20:00.146916 5104 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9bc1e14b-08a9-46fd-a3b0-a1754fa1d35b" containerName="oc" Jan 30 00:20:00 crc kubenswrapper[5104]: I0130 00:20:00.146948 5104 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bc1e14b-08a9-46fd-a3b0-a1754fa1d35b" containerName="oc" Jan 30 00:20:00 crc kubenswrapper[5104]: I0130 00:20:00.147096 5104 memory_manager.go:356] "RemoveStaleState removing state" podUID="9bc1e14b-08a9-46fd-a3b0-a1754fa1d35b" containerName="oc" Jan 30 00:20:00 crc kubenswrapper[5104]: 
I0130 00:20:00.156297 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29495540-ns2dd"] Jan 30 00:20:00 crc kubenswrapper[5104]: I0130 00:20:00.156420 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29495540-ns2dd" Jan 30 00:20:00 crc kubenswrapper[5104]: I0130 00:20:00.159122 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 30 00:20:00 crc kubenswrapper[5104]: I0130 00:20:00.159416 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 30 00:20:00 crc kubenswrapper[5104]: I0130 00:20:00.159611 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-xh9r9\"" Jan 30 00:20:00 crc kubenswrapper[5104]: I0130 00:20:00.233781 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8x7w\" (UniqueName: \"kubernetes.io/projected/1de56b55-9735-4835-8a38-2984afa2ebb9-kube-api-access-h8x7w\") pod \"auto-csr-approver-29495540-ns2dd\" (UID: \"1de56b55-9735-4835-8a38-2984afa2ebb9\") " pod="openshift-infra/auto-csr-approver-29495540-ns2dd" Jan 30 00:20:00 crc kubenswrapper[5104]: I0130 00:20:00.334886 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-h8x7w\" (UniqueName: \"kubernetes.io/projected/1de56b55-9735-4835-8a38-2984afa2ebb9-kube-api-access-h8x7w\") pod \"auto-csr-approver-29495540-ns2dd\" (UID: \"1de56b55-9735-4835-8a38-2984afa2ebb9\") " pod="openshift-infra/auto-csr-approver-29495540-ns2dd" Jan 30 00:20:00 crc kubenswrapper[5104]: I0130 00:20:00.361450 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8x7w\" (UniqueName: 
\"kubernetes.io/projected/1de56b55-9735-4835-8a38-2984afa2ebb9-kube-api-access-h8x7w\") pod \"auto-csr-approver-29495540-ns2dd\" (UID: \"1de56b55-9735-4835-8a38-2984afa2ebb9\") " pod="openshift-infra/auto-csr-approver-29495540-ns2dd" Jan 30 00:20:00 crc kubenswrapper[5104]: I0130 00:20:00.474300 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29495540-ns2dd" Jan 30 00:20:00 crc kubenswrapper[5104]: I0130 00:20:00.715730 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29495540-ns2dd"] Jan 30 00:20:01 crc kubenswrapper[5104]: I0130 00:20:01.426365 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495540-ns2dd" event={"ID":"1de56b55-9735-4835-8a38-2984afa2ebb9","Type":"ContainerStarted","Data":"c2ab5e7fd987715b821ead65a59e05b87db86fb2375398ef23c564ef364fc773"} Jan 30 00:20:02 crc kubenswrapper[5104]: I0130 00:20:02.433787 5104 generic.go:358] "Generic (PLEG): container finished" podID="1de56b55-9735-4835-8a38-2984afa2ebb9" containerID="1f9893e60cad40dd85400ca575c73eccd2a9cbf08977b2ca04b1a8a9bf1ac997" exitCode=0 Jan 30 00:20:02 crc kubenswrapper[5104]: I0130 00:20:02.433838 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495540-ns2dd" event={"ID":"1de56b55-9735-4835-8a38-2984afa2ebb9","Type":"ContainerDied","Data":"1f9893e60cad40dd85400ca575c73eccd2a9cbf08977b2ca04b1a8a9bf1ac997"} Jan 30 00:20:03 crc kubenswrapper[5104]: I0130 00:20:03.762514 5104 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29495540-ns2dd" Jan 30 00:20:03 crc kubenswrapper[5104]: I0130 00:20:03.877973 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8x7w\" (UniqueName: \"kubernetes.io/projected/1de56b55-9735-4835-8a38-2984afa2ebb9-kube-api-access-h8x7w\") pod \"1de56b55-9735-4835-8a38-2984afa2ebb9\" (UID: \"1de56b55-9735-4835-8a38-2984afa2ebb9\") " Jan 30 00:20:03 crc kubenswrapper[5104]: I0130 00:20:03.884724 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1de56b55-9735-4835-8a38-2984afa2ebb9-kube-api-access-h8x7w" (OuterVolumeSpecName: "kube-api-access-h8x7w") pod "1de56b55-9735-4835-8a38-2984afa2ebb9" (UID: "1de56b55-9735-4835-8a38-2984afa2ebb9"). InnerVolumeSpecName "kube-api-access-h8x7w". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:20:03 crc kubenswrapper[5104]: I0130 00:20:03.979383 5104 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-h8x7w\" (UniqueName: \"kubernetes.io/projected/1de56b55-9735-4835-8a38-2984afa2ebb9-kube-api-access-h8x7w\") on node \"crc\" DevicePath \"\"" Jan 30 00:20:04 crc kubenswrapper[5104]: I0130 00:20:04.449387 5104 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29495540-ns2dd" Jan 30 00:20:04 crc kubenswrapper[5104]: I0130 00:20:04.449462 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495540-ns2dd" event={"ID":"1de56b55-9735-4835-8a38-2984afa2ebb9","Type":"ContainerDied","Data":"c2ab5e7fd987715b821ead65a59e05b87db86fb2375398ef23c564ef364fc773"} Jan 30 00:20:04 crc kubenswrapper[5104]: I0130 00:20:04.449501 5104 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2ab5e7fd987715b821ead65a59e05b87db86fb2375398ef23c564ef364fc773" Jan 30 00:20:20 crc kubenswrapper[5104]: I0130 00:20:20.772827 5104 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 30 00:20:20 crc kubenswrapper[5104]: I0130 00:20:20.775563 5104 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 30 00:20:57 crc kubenswrapper[5104]: I0130 00:20:57.432775 5104 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-zg4cj"] Jan 30 00:20:57 crc kubenswrapper[5104]: I0130 00:20:57.433658 5104 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-zg4cj" podUID="925f8c53-ccbf-4f3c-a811-4d64d678e217" containerName="kube-rbac-proxy" containerID="cri-o://30e984e03687bc4563d8f4925fb59591eba19addcfe856deb3f4dac7ef260a88" gracePeriod=30 Jan 30 00:20:57 crc kubenswrapper[5104]: I0130 00:20:57.434110 5104 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-zg4cj" podUID="925f8c53-ccbf-4f3c-a811-4d64d678e217" containerName="ovnkube-cluster-manager" 
containerID="cri-o://ea26defff0da85d93df6b1790036090f8ff48e30049b02c4afaf12e066b80d21" gracePeriod=30 Jan 30 00:20:57 crc kubenswrapper[5104]: I0130 00:20:57.629132 5104 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-zg4cj" Jan 30 00:20:57 crc kubenswrapper[5104]: I0130 00:20:57.664537 5104 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-5mjmx"] Jan 30 00:20:57 crc kubenswrapper[5104]: I0130 00:20:57.665237 5104 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="925f8c53-ccbf-4f3c-a811-4d64d678e217" containerName="kube-rbac-proxy" Jan 30 00:20:57 crc kubenswrapper[5104]: I0130 00:20:57.665257 5104 state_mem.go:107] "Deleted CPUSet assignment" podUID="925f8c53-ccbf-4f3c-a811-4d64d678e217" containerName="kube-rbac-proxy" Jan 30 00:20:57 crc kubenswrapper[5104]: I0130 00:20:57.665283 5104 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="925f8c53-ccbf-4f3c-a811-4d64d678e217" containerName="ovnkube-cluster-manager" Jan 30 00:20:57 crc kubenswrapper[5104]: I0130 00:20:57.665290 5104 state_mem.go:107] "Deleted CPUSet assignment" podUID="925f8c53-ccbf-4f3c-a811-4d64d678e217" containerName="ovnkube-cluster-manager" Jan 30 00:20:57 crc kubenswrapper[5104]: I0130 00:20:57.665311 5104 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1de56b55-9735-4835-8a38-2984afa2ebb9" containerName="oc" Jan 30 00:20:57 crc kubenswrapper[5104]: I0130 00:20:57.665319 5104 state_mem.go:107] "Deleted CPUSet assignment" podUID="1de56b55-9735-4835-8a38-2984afa2ebb9" containerName="oc" Jan 30 00:20:57 crc kubenswrapper[5104]: I0130 00:20:57.665427 5104 memory_manager.go:356] "RemoveStaleState removing state" podUID="1de56b55-9735-4835-8a38-2984afa2ebb9" containerName="oc" Jan 30 00:20:57 crc kubenswrapper[5104]: I0130 00:20:57.665442 5104 
memory_manager.go:356] "RemoveStaleState removing state" podUID="925f8c53-ccbf-4f3c-a811-4d64d678e217" containerName="ovnkube-cluster-manager" Jan 30 00:20:57 crc kubenswrapper[5104]: I0130 00:20:57.665450 5104 memory_manager.go:356] "RemoveStaleState removing state" podUID="925f8c53-ccbf-4f3c-a811-4d64d678e217" containerName="kube-rbac-proxy" Jan 30 00:20:57 crc kubenswrapper[5104]: I0130 00:20:57.672072 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-5mjmx" Jan 30 00:20:57 crc kubenswrapper[5104]: I0130 00:20:57.710732 5104 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-dr5dp"] Jan 30 00:20:57 crc kubenswrapper[5104]: I0130 00:20:57.711381 5104 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-dr5dp" podUID="4dd9b451-9f5e-4822-b340-7557a89a3ce0" containerName="ovn-controller" containerID="cri-o://00bbd3826abc367fb2f43d9978de5b27a990cf044f166f1cfa8c32768845c5b9" gracePeriod=30 Jan 30 00:20:57 crc kubenswrapper[5104]: I0130 00:20:57.711437 5104 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-dr5dp" podUID="4dd9b451-9f5e-4822-b340-7557a89a3ce0" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://200dcc3840f55cbd23f7b942c041eac6fbc6806b7f71b56e4670f5259d00c70b" gracePeriod=30 Jan 30 00:20:57 crc kubenswrapper[5104]: I0130 00:20:57.711521 5104 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-dr5dp" podUID="4dd9b451-9f5e-4822-b340-7557a89a3ce0" containerName="ovn-acl-logging" containerID="cri-o://7f84773ad1561f2468a7b9e564afe4da5466479c27455daffc938f275cdd8759" gracePeriod=30 Jan 30 00:20:57 crc kubenswrapper[5104]: I0130 00:20:57.711531 5104 kuberuntime_container.go:858] "Killing container with a grace period" 
pod="openshift-ovn-kubernetes/ovnkube-node-dr5dp" podUID="4dd9b451-9f5e-4822-b340-7557a89a3ce0" containerName="northd" containerID="cri-o://a2094fbdb67d319df30745be54abcc7fd896a74987d488212e62dd2591fc4547" gracePeriod=30 Jan 30 00:20:57 crc kubenswrapper[5104]: I0130 00:20:57.711510 5104 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-dr5dp" podUID="4dd9b451-9f5e-4822-b340-7557a89a3ce0" containerName="kube-rbac-proxy-node" containerID="cri-o://fcf28f348ed83bc59a65151c45ade27274d1c84f9d8088daf406794c698be505" gracePeriod=30 Jan 30 00:20:57 crc kubenswrapper[5104]: I0130 00:20:57.711573 5104 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-dr5dp" podUID="4dd9b451-9f5e-4822-b340-7557a89a3ce0" containerName="sbdb" containerID="cri-o://40d298f83de0c723a4a838843795e35036effc7bfbb77beb389a48bdc25f8524" gracePeriod=30 Jan 30 00:20:57 crc kubenswrapper[5104]: I0130 00:20:57.711436 5104 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-dr5dp" podUID="4dd9b451-9f5e-4822-b340-7557a89a3ce0" containerName="nbdb" containerID="cri-o://5f894851ec70fd49996a95dc6ef315eed6b5ab4b80619be9c78b9bba34ea1f86" gracePeriod=30 Jan 30 00:20:57 crc kubenswrapper[5104]: I0130 00:20:57.784349 5104 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-dr5dp" podUID="4dd9b451-9f5e-4822-b340-7557a89a3ce0" containerName="ovnkube-controller" containerID="cri-o://bbff92e295524c14ff63f4ec0ed873547af60dd65283e33a4cadf5ad7004edef" gracePeriod=30 Jan 30 00:20:57 crc kubenswrapper[5104]: I0130 00:20:57.784600 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f8c53-ccbf-4f3c-a811-4d64d678e217-env-overrides\") pod \"925f8c53-ccbf-4f3c-a811-4d64d678e217\" (UID: 
\"925f8c53-ccbf-4f3c-a811-4d64d678e217\") " Jan 30 00:20:57 crc kubenswrapper[5104]: I0130 00:20:57.784713 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f8c53-ccbf-4f3c-a811-4d64d678e217-ovnkube-config\") pod \"925f8c53-ccbf-4f3c-a811-4d64d678e217\" (UID: \"925f8c53-ccbf-4f3c-a811-4d64d678e217\") " Jan 30 00:20:57 crc kubenswrapper[5104]: I0130 00:20:57.784785 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f8c53-ccbf-4f3c-a811-4d64d678e217-ovn-control-plane-metrics-cert\") pod \"925f8c53-ccbf-4f3c-a811-4d64d678e217\" (UID: \"925f8c53-ccbf-4f3c-a811-4d64d678e217\") " Jan 30 00:20:57 crc kubenswrapper[5104]: I0130 00:20:57.784805 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t4vhr\" (UniqueName: \"kubernetes.io/projected/925f8c53-ccbf-4f3c-a811-4d64d678e217-kube-api-access-t4vhr\") pod \"925f8c53-ccbf-4f3c-a811-4d64d678e217\" (UID: \"925f8c53-ccbf-4f3c-a811-4d64d678e217\") " Jan 30 00:20:57 crc kubenswrapper[5104]: I0130 00:20:57.785762 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f8c53-ccbf-4f3c-a811-4d64d678e217-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f8c53-ccbf-4f3c-a811-4d64d678e217" (UID: "925f8c53-ccbf-4f3c-a811-4d64d678e217"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:20:57 crc kubenswrapper[5104]: I0130 00:20:57.785980 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f8c53-ccbf-4f3c-a811-4d64d678e217-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f8c53-ccbf-4f3c-a811-4d64d678e217" (UID: "925f8c53-ccbf-4f3c-a811-4d64d678e217"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:20:57 crc kubenswrapper[5104]: I0130 00:20:57.792051 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f8c53-ccbf-4f3c-a811-4d64d678e217-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f8c53-ccbf-4f3c-a811-4d64d678e217" (UID: "925f8c53-ccbf-4f3c-a811-4d64d678e217"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:20:57 crc kubenswrapper[5104]: I0130 00:20:57.797223 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f47b7290-ef21-4ae7-ac34-041e6e7a2d89-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-5mjmx\" (UID: \"f47b7290-ef21-4ae7-ac34-041e6e7a2d89\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-5mjmx" Jan 30 00:20:57 crc kubenswrapper[5104]: I0130 00:20:57.797299 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f47b7290-ef21-4ae7-ac34-041e6e7a2d89-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-5mjmx\" (UID: \"f47b7290-ef21-4ae7-ac34-041e6e7a2d89\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-5mjmx" Jan 30 00:20:57 crc kubenswrapper[5104]: I0130 00:20:57.797378 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f47b7290-ef21-4ae7-ac34-041e6e7a2d89-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-5mjmx\" (UID: \"f47b7290-ef21-4ae7-ac34-041e6e7a2d89\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-5mjmx" Jan 30 00:20:57 crc kubenswrapper[5104]: I0130 00:20:57.797473 5104 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fx7gr\" (UniqueName: \"kubernetes.io/projected/f47b7290-ef21-4ae7-ac34-041e6e7a2d89-kube-api-access-fx7gr\") pod \"ovnkube-control-plane-97c9b6c48-5mjmx\" (UID: \"f47b7290-ef21-4ae7-ac34-041e6e7a2d89\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-5mjmx" Jan 30 00:20:57 crc kubenswrapper[5104]: I0130 00:20:57.797559 5104 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f8c53-ccbf-4f3c-a811-4d64d678e217-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 30 00:20:57 crc kubenswrapper[5104]: I0130 00:20:57.797570 5104 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f8c53-ccbf-4f3c-a811-4d64d678e217-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:20:57 crc kubenswrapper[5104]: I0130 00:20:57.797579 5104 reconciler_common.go:299] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f8c53-ccbf-4f3c-a811-4d64d678e217-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:20:57 crc kubenswrapper[5104]: I0130 00:20:57.799276 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f8c53-ccbf-4f3c-a811-4d64d678e217-kube-api-access-t4vhr" (OuterVolumeSpecName: "kube-api-access-t4vhr") pod "925f8c53-ccbf-4f3c-a811-4d64d678e217" (UID: "925f8c53-ccbf-4f3c-a811-4d64d678e217"). InnerVolumeSpecName "kube-api-access-t4vhr". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:20:57 crc kubenswrapper[5104]: I0130 00:20:57.803712 5104 generic.go:358] "Generic (PLEG): container finished" podID="925f8c53-ccbf-4f3c-a811-4d64d678e217" containerID="ea26defff0da85d93df6b1790036090f8ff48e30049b02c4afaf12e066b80d21" exitCode=0 Jan 30 00:20:57 crc kubenswrapper[5104]: I0130 00:20:57.803734 5104 generic.go:358] "Generic (PLEG): container finished" podID="925f8c53-ccbf-4f3c-a811-4d64d678e217" containerID="30e984e03687bc4563d8f4925fb59591eba19addcfe856deb3f4dac7ef260a88" exitCode=0 Jan 30 00:20:57 crc kubenswrapper[5104]: I0130 00:20:57.803896 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-zg4cj" event={"ID":"925f8c53-ccbf-4f3c-a811-4d64d678e217","Type":"ContainerDied","Data":"ea26defff0da85d93df6b1790036090f8ff48e30049b02c4afaf12e066b80d21"} Jan 30 00:20:57 crc kubenswrapper[5104]: I0130 00:20:57.803947 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-zg4cj" event={"ID":"925f8c53-ccbf-4f3c-a811-4d64d678e217","Type":"ContainerDied","Data":"30e984e03687bc4563d8f4925fb59591eba19addcfe856deb3f4dac7ef260a88"} Jan 30 00:20:57 crc kubenswrapper[5104]: I0130 00:20:57.803957 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-zg4cj" event={"ID":"925f8c53-ccbf-4f3c-a811-4d64d678e217","Type":"ContainerDied","Data":"6b26fb8323c1fa1f41e9b6f71949dc98a78c2d69b578258e6da79a7d9af02855"} Jan 30 00:20:57 crc kubenswrapper[5104]: I0130 00:20:57.803974 5104 scope.go:117] "RemoveContainer" containerID="ea26defff0da85d93df6b1790036090f8ff48e30049b02c4afaf12e066b80d21" Jan 30 00:20:57 crc kubenswrapper[5104]: I0130 00:20:57.804123 5104 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-zg4cj" Jan 30 00:20:57 crc kubenswrapper[5104]: I0130 00:20:57.884225 5104 scope.go:117] "RemoveContainer" containerID="30e984e03687bc4563d8f4925fb59591eba19addcfe856deb3f4dac7ef260a88" Jan 30 00:20:57 crc kubenswrapper[5104]: I0130 00:20:57.886036 5104 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-zg4cj"] Jan 30 00:20:57 crc kubenswrapper[5104]: I0130 00:20:57.890615 5104 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-zg4cj"] Jan 30 00:20:57 crc kubenswrapper[5104]: I0130 00:20:57.896228 5104 scope.go:117] "RemoveContainer" containerID="ea26defff0da85d93df6b1790036090f8ff48e30049b02c4afaf12e066b80d21" Jan 30 00:20:57 crc kubenswrapper[5104]: E0130 00:20:57.896536 5104 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea26defff0da85d93df6b1790036090f8ff48e30049b02c4afaf12e066b80d21\": container with ID starting with ea26defff0da85d93df6b1790036090f8ff48e30049b02c4afaf12e066b80d21 not found: ID does not exist" containerID="ea26defff0da85d93df6b1790036090f8ff48e30049b02c4afaf12e066b80d21" Jan 30 00:20:57 crc kubenswrapper[5104]: I0130 00:20:57.896568 5104 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea26defff0da85d93df6b1790036090f8ff48e30049b02c4afaf12e066b80d21"} err="failed to get container status \"ea26defff0da85d93df6b1790036090f8ff48e30049b02c4afaf12e066b80d21\": rpc error: code = NotFound desc = could not find container \"ea26defff0da85d93df6b1790036090f8ff48e30049b02c4afaf12e066b80d21\": container with ID starting with ea26defff0da85d93df6b1790036090f8ff48e30049b02c4afaf12e066b80d21 not found: ID does not exist" Jan 30 00:20:57 crc kubenswrapper[5104]: I0130 00:20:57.896585 5104 scope.go:117] "RemoveContainer" 
containerID="30e984e03687bc4563d8f4925fb59591eba19addcfe856deb3f4dac7ef260a88" Jan 30 00:20:57 crc kubenswrapper[5104]: E0130 00:20:57.896746 5104 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30e984e03687bc4563d8f4925fb59591eba19addcfe856deb3f4dac7ef260a88\": container with ID starting with 30e984e03687bc4563d8f4925fb59591eba19addcfe856deb3f4dac7ef260a88 not found: ID does not exist" containerID="30e984e03687bc4563d8f4925fb59591eba19addcfe856deb3f4dac7ef260a88" Jan 30 00:20:57 crc kubenswrapper[5104]: I0130 00:20:57.896786 5104 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30e984e03687bc4563d8f4925fb59591eba19addcfe856deb3f4dac7ef260a88"} err="failed to get container status \"30e984e03687bc4563d8f4925fb59591eba19addcfe856deb3f4dac7ef260a88\": rpc error: code = NotFound desc = could not find container \"30e984e03687bc4563d8f4925fb59591eba19addcfe856deb3f4dac7ef260a88\": container with ID starting with 30e984e03687bc4563d8f4925fb59591eba19addcfe856deb3f4dac7ef260a88 not found: ID does not exist" Jan 30 00:20:57 crc kubenswrapper[5104]: I0130 00:20:57.896799 5104 scope.go:117] "RemoveContainer" containerID="ea26defff0da85d93df6b1790036090f8ff48e30049b02c4afaf12e066b80d21" Jan 30 00:20:57 crc kubenswrapper[5104]: I0130 00:20:57.896948 5104 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea26defff0da85d93df6b1790036090f8ff48e30049b02c4afaf12e066b80d21"} err="failed to get container status \"ea26defff0da85d93df6b1790036090f8ff48e30049b02c4afaf12e066b80d21\": rpc error: code = NotFound desc = could not find container \"ea26defff0da85d93df6b1790036090f8ff48e30049b02c4afaf12e066b80d21\": container with ID starting with ea26defff0da85d93df6b1790036090f8ff48e30049b02c4afaf12e066b80d21 not found: ID does not exist" Jan 30 00:20:57 crc kubenswrapper[5104]: I0130 00:20:57.896966 5104 scope.go:117] 
"RemoveContainer" containerID="30e984e03687bc4563d8f4925fb59591eba19addcfe856deb3f4dac7ef260a88" Jan 30 00:20:57 crc kubenswrapper[5104]: I0130 00:20:57.897204 5104 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30e984e03687bc4563d8f4925fb59591eba19addcfe856deb3f4dac7ef260a88"} err="failed to get container status \"30e984e03687bc4563d8f4925fb59591eba19addcfe856deb3f4dac7ef260a88\": rpc error: code = NotFound desc = could not find container \"30e984e03687bc4563d8f4925fb59591eba19addcfe856deb3f4dac7ef260a88\": container with ID starting with 30e984e03687bc4563d8f4925fb59591eba19addcfe856deb3f4dac7ef260a88 not found: ID does not exist" Jan 30 00:20:57 crc kubenswrapper[5104]: I0130 00:20:57.898772 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f47b7290-ef21-4ae7-ac34-041e6e7a2d89-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-5mjmx\" (UID: \"f47b7290-ef21-4ae7-ac34-041e6e7a2d89\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-5mjmx" Jan 30 00:20:57 crc kubenswrapper[5104]: I0130 00:20:57.898811 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f47b7290-ef21-4ae7-ac34-041e6e7a2d89-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-5mjmx\" (UID: \"f47b7290-ef21-4ae7-ac34-041e6e7a2d89\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-5mjmx" Jan 30 00:20:57 crc kubenswrapper[5104]: I0130 00:20:57.898902 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fx7gr\" (UniqueName: \"kubernetes.io/projected/f47b7290-ef21-4ae7-ac34-041e6e7a2d89-kube-api-access-fx7gr\") pod \"ovnkube-control-plane-97c9b6c48-5mjmx\" (UID: \"f47b7290-ef21-4ae7-ac34-041e6e7a2d89\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-5mjmx" Jan 30 00:20:57 crc 
kubenswrapper[5104]: I0130 00:20:57.898990 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f47b7290-ef21-4ae7-ac34-041e6e7a2d89-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-5mjmx\" (UID: \"f47b7290-ef21-4ae7-ac34-041e6e7a2d89\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-5mjmx" Jan 30 00:20:57 crc kubenswrapper[5104]: I0130 00:20:57.899044 5104 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-t4vhr\" (UniqueName: \"kubernetes.io/projected/925f8c53-ccbf-4f3c-a811-4d64d678e217-kube-api-access-t4vhr\") on node \"crc\" DevicePath \"\"" Jan 30 00:20:57 crc kubenswrapper[5104]: I0130 00:20:57.899516 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f47b7290-ef21-4ae7-ac34-041e6e7a2d89-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-5mjmx\" (UID: \"f47b7290-ef21-4ae7-ac34-041e6e7a2d89\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-5mjmx" Jan 30 00:20:57 crc kubenswrapper[5104]: I0130 00:20:57.899608 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f47b7290-ef21-4ae7-ac34-041e6e7a2d89-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-5mjmx\" (UID: \"f47b7290-ef21-4ae7-ac34-041e6e7a2d89\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-5mjmx" Jan 30 00:20:57 crc kubenswrapper[5104]: I0130 00:20:57.904316 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f47b7290-ef21-4ae7-ac34-041e6e7a2d89-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-5mjmx\" (UID: \"f47b7290-ef21-4ae7-ac34-041e6e7a2d89\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-5mjmx" Jan 30 00:20:57 
crc kubenswrapper[5104]: I0130 00:20:57.915193 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fx7gr\" (UniqueName: \"kubernetes.io/projected/f47b7290-ef21-4ae7-ac34-041e6e7a2d89-kube-api-access-fx7gr\") pod \"ovnkube-control-plane-97c9b6c48-5mjmx\" (UID: \"f47b7290-ef21-4ae7-ac34-041e6e7a2d89\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-5mjmx" Jan 30 00:20:57 crc kubenswrapper[5104]: I0130 00:20:57.985105 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-5mjmx" Jan 30 00:20:58 crc kubenswrapper[5104]: W0130 00:20:58.007567 5104 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf47b7290_ef21_4ae7_ac34_041e6e7a2d89.slice/crio-736ab0a02295bcbbcdab3d6a6e78a1a34c40e2ae4a1d5e87d454969e9d4a86a3 WatchSource:0}: Error finding container 736ab0a02295bcbbcdab3d6a6e78a1a34c40e2ae4a1d5e87d454969e9d4a86a3: Status 404 returned error can't find the container with id 736ab0a02295bcbbcdab3d6a6e78a1a34c40e2ae4a1d5e87d454969e9d4a86a3 Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.009701 5104 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.075692 5104 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dr5dp_4dd9b451-9f5e-4822-b340-7557a89a3ce0/ovn-acl-logging/0.log" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.076303 5104 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dr5dp_4dd9b451-9f5e-4822-b340-7557a89a3ce0/ovn-controller/0.log" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.077051 5104 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-dr5dp" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.101471 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4dd9b451-9f5e-4822-b340-7557a89a3ce0-host-var-lib-cni-networks-ovn-kubernetes\") pod \"4dd9b451-9f5e-4822-b340-7557a89a3ce0\" (UID: \"4dd9b451-9f5e-4822-b340-7557a89a3ce0\") " Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.101752 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qkmsn\" (UniqueName: \"kubernetes.io/projected/4dd9b451-9f5e-4822-b340-7557a89a3ce0-kube-api-access-qkmsn\") pod \"4dd9b451-9f5e-4822-b340-7557a89a3ce0\" (UID: \"4dd9b451-9f5e-4822-b340-7557a89a3ce0\") " Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.101771 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4dd9b451-9f5e-4822-b340-7557a89a3ce0-var-lib-openvswitch\") pod \"4dd9b451-9f5e-4822-b340-7557a89a3ce0\" (UID: \"4dd9b451-9f5e-4822-b340-7557a89a3ce0\") " Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.101803 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4dd9b451-9f5e-4822-b340-7557a89a3ce0-host-run-ovn-kubernetes\") pod \"4dd9b451-9f5e-4822-b340-7557a89a3ce0\" (UID: \"4dd9b451-9f5e-4822-b340-7557a89a3ce0\") " Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.101816 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4dd9b451-9f5e-4822-b340-7557a89a3ce0-etc-openvswitch\") pod \"4dd9b451-9f5e-4822-b340-7557a89a3ce0\" (UID: \"4dd9b451-9f5e-4822-b340-7557a89a3ce0\") " Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 
00:20:58.101832 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4dd9b451-9f5e-4822-b340-7557a89a3ce0-systemd-units\") pod \"4dd9b451-9f5e-4822-b340-7557a89a3ce0\" (UID: \"4dd9b451-9f5e-4822-b340-7557a89a3ce0\") " Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.101575 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4dd9b451-9f5e-4822-b340-7557a89a3ce0-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "4dd9b451-9f5e-4822-b340-7557a89a3ce0" (UID: "4dd9b451-9f5e-4822-b340-7557a89a3ce0"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.101874 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4dd9b451-9f5e-4822-b340-7557a89a3ce0-ovnkube-config\") pod \"4dd9b451-9f5e-4822-b340-7557a89a3ce0\" (UID: \"4dd9b451-9f5e-4822-b340-7557a89a3ce0\") " Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.101892 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4dd9b451-9f5e-4822-b340-7557a89a3ce0-host-slash\") pod \"4dd9b451-9f5e-4822-b340-7557a89a3ce0\" (UID: \"4dd9b451-9f5e-4822-b340-7557a89a3ce0\") " Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.101917 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4dd9b451-9f5e-4822-b340-7557a89a3ce0-host-cni-bin\") pod \"4dd9b451-9f5e-4822-b340-7557a89a3ce0\" (UID: \"4dd9b451-9f5e-4822-b340-7557a89a3ce0\") " Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.101939 5104 operation_generator.go:781] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/host-path/4dd9b451-9f5e-4822-b340-7557a89a3ce0-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "4dd9b451-9f5e-4822-b340-7557a89a3ce0" (UID: "4dd9b451-9f5e-4822-b340-7557a89a3ce0"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.101964 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4dd9b451-9f5e-4822-b340-7557a89a3ce0-run-ovn\") pod \"4dd9b451-9f5e-4822-b340-7557a89a3ce0\" (UID: \"4dd9b451-9f5e-4822-b340-7557a89a3ce0\") " Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.101982 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4dd9b451-9f5e-4822-b340-7557a89a3ce0-ovnkube-script-lib\") pod \"4dd9b451-9f5e-4822-b340-7557a89a3ce0\" (UID: \"4dd9b451-9f5e-4822-b340-7557a89a3ce0\") " Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.102028 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4dd9b451-9f5e-4822-b340-7557a89a3ce0-node-log\") pod \"4dd9b451-9f5e-4822-b340-7557a89a3ce0\" (UID: \"4dd9b451-9f5e-4822-b340-7557a89a3ce0\") " Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.102046 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4dd9b451-9f5e-4822-b340-7557a89a3ce0-env-overrides\") pod \"4dd9b451-9f5e-4822-b340-7557a89a3ce0\" (UID: \"4dd9b451-9f5e-4822-b340-7557a89a3ce0\") " Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.102098 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4dd9b451-9f5e-4822-b340-7557a89a3ce0-host-cni-netd\") pod 
\"4dd9b451-9f5e-4822-b340-7557a89a3ce0\" (UID: \"4dd9b451-9f5e-4822-b340-7557a89a3ce0\") " Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.102139 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4dd9b451-9f5e-4822-b340-7557a89a3ce0-host-run-netns\") pod \"4dd9b451-9f5e-4822-b340-7557a89a3ce0\" (UID: \"4dd9b451-9f5e-4822-b340-7557a89a3ce0\") " Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.102153 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4dd9b451-9f5e-4822-b340-7557a89a3ce0-run-systemd\") pod \"4dd9b451-9f5e-4822-b340-7557a89a3ce0\" (UID: \"4dd9b451-9f5e-4822-b340-7557a89a3ce0\") " Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.102170 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4dd9b451-9f5e-4822-b340-7557a89a3ce0-run-openvswitch\") pod \"4dd9b451-9f5e-4822-b340-7557a89a3ce0\" (UID: \"4dd9b451-9f5e-4822-b340-7557a89a3ce0\") " Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.102634 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4dd9b451-9f5e-4822-b340-7557a89a3ce0-ovn-node-metrics-cert\") pod \"4dd9b451-9f5e-4822-b340-7557a89a3ce0\" (UID: \"4dd9b451-9f5e-4822-b340-7557a89a3ce0\") " Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.102706 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4dd9b451-9f5e-4822-b340-7557a89a3ce0-host-kubelet\") pod \"4dd9b451-9f5e-4822-b340-7557a89a3ce0\" (UID: \"4dd9b451-9f5e-4822-b340-7557a89a3ce0\") " Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.102748 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started 
for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4dd9b451-9f5e-4822-b340-7557a89a3ce0-log-socket\") pod \"4dd9b451-9f5e-4822-b340-7557a89a3ce0\" (UID: \"4dd9b451-9f5e-4822-b340-7557a89a3ce0\") " Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.101977 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4dd9b451-9f5e-4822-b340-7557a89a3ce0-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "4dd9b451-9f5e-4822-b340-7557a89a3ce0" (UID: "4dd9b451-9f5e-4822-b340-7557a89a3ce0"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.102000 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4dd9b451-9f5e-4822-b340-7557a89a3ce0-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "4dd9b451-9f5e-4822-b340-7557a89a3ce0" (UID: "4dd9b451-9f5e-4822-b340-7557a89a3ce0"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.101992 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4dd9b451-9f5e-4822-b340-7557a89a3ce0-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "4dd9b451-9f5e-4822-b340-7557a89a3ce0" (UID: "4dd9b451-9f5e-4822-b340-7557a89a3ce0"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.102027 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4dd9b451-9f5e-4822-b340-7557a89a3ce0-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "4dd9b451-9f5e-4822-b340-7557a89a3ce0" (UID: "4dd9b451-9f5e-4822-b340-7557a89a3ce0"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.102868 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4dd9b451-9f5e-4822-b340-7557a89a3ce0-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "4dd9b451-9f5e-4822-b340-7557a89a3ce0" (UID: "4dd9b451-9f5e-4822-b340-7557a89a3ce0"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.102884 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4dd9b451-9f5e-4822-b340-7557a89a3ce0-log-socket" (OuterVolumeSpecName: "log-socket") pod "4dd9b451-9f5e-4822-b340-7557a89a3ce0" (UID: "4dd9b451-9f5e-4822-b340-7557a89a3ce0"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.102048 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4dd9b451-9f5e-4822-b340-7557a89a3ce0-host-slash" (OuterVolumeSpecName: "host-slash") pod "4dd9b451-9f5e-4822-b340-7557a89a3ce0" (UID: "4dd9b451-9f5e-4822-b340-7557a89a3ce0"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.102681 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4dd9b451-9f5e-4822-b340-7557a89a3ce0-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "4dd9b451-9f5e-4822-b340-7557a89a3ce0" (UID: "4dd9b451-9f5e-4822-b340-7557a89a3ce0"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.102703 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4dd9b451-9f5e-4822-b340-7557a89a3ce0-node-log" (OuterVolumeSpecName: "node-log") pod "4dd9b451-9f5e-4822-b340-7557a89a3ce0" (UID: "4dd9b451-9f5e-4822-b340-7557a89a3ce0"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.102730 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4dd9b451-9f5e-4822-b340-7557a89a3ce0-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "4dd9b451-9f5e-4822-b340-7557a89a3ce0" (UID: "4dd9b451-9f5e-4822-b340-7557a89a3ce0"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.102740 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4dd9b451-9f5e-4822-b340-7557a89a3ce0-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "4dd9b451-9f5e-4822-b340-7557a89a3ce0" (UID: "4dd9b451-9f5e-4822-b340-7557a89a3ce0"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.102780 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4dd9b451-9f5e-4822-b340-7557a89a3ce0-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "4dd9b451-9f5e-4822-b340-7557a89a3ce0" (UID: "4dd9b451-9f5e-4822-b340-7557a89a3ce0"). InnerVolumeSpecName "run-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.102800 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4dd9b451-9f5e-4822-b340-7557a89a3ce0-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "4dd9b451-9f5e-4822-b340-7557a89a3ce0" (UID: "4dd9b451-9f5e-4822-b340-7557a89a3ce0"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.103093 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4dd9b451-9f5e-4822-b340-7557a89a3ce0-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "4dd9b451-9f5e-4822-b340-7557a89a3ce0" (UID: "4dd9b451-9f5e-4822-b340-7557a89a3ce0"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.103376 5104 reconciler_common.go:299] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4dd9b451-9f5e-4822-b340-7557a89a3ce0-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.103398 5104 reconciler_common.go:299] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4dd9b451-9f5e-4822-b340-7557a89a3ce0-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.103413 5104 reconciler_common.go:299] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4dd9b451-9f5e-4822-b340-7557a89a3ce0-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.103427 5104 reconciler_common.go:299] "Volume detached for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/4dd9b451-9f5e-4822-b340-7557a89a3ce0-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.103443 5104 reconciler_common.go:299] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4dd9b451-9f5e-4822-b340-7557a89a3ce0-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.103458 5104 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4dd9b451-9f5e-4822-b340-7557a89a3ce0-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.103470 5104 reconciler_common.go:299] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4dd9b451-9f5e-4822-b340-7557a89a3ce0-host-slash\") on node \"crc\" DevicePath \"\"" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.103480 5104 reconciler_common.go:299] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4dd9b451-9f5e-4822-b340-7557a89a3ce0-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.103492 5104 reconciler_common.go:299] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4dd9b451-9f5e-4822-b340-7557a89a3ce0-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.103502 5104 reconciler_common.go:299] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4dd9b451-9f5e-4822-b340-7557a89a3ce0-node-log\") on node \"crc\" DevicePath \"\"" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.103512 5104 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4dd9b451-9f5e-4822-b340-7557a89a3ce0-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 
00:20:58.103522 5104 reconciler_common.go:299] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4dd9b451-9f5e-4822-b340-7557a89a3ce0-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.103532 5104 reconciler_common.go:299] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4dd9b451-9f5e-4822-b340-7557a89a3ce0-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.103542 5104 reconciler_common.go:299] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4dd9b451-9f5e-4822-b340-7557a89a3ce0-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.103553 5104 reconciler_common.go:299] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4dd9b451-9f5e-4822-b340-7557a89a3ce0-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.103562 5104 reconciler_common.go:299] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4dd9b451-9f5e-4822-b340-7557a89a3ce0-log-socket\") on node \"crc\" DevicePath \"\"" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.103700 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4dd9b451-9f5e-4822-b340-7557a89a3ce0-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "4dd9b451-9f5e-4822-b340-7557a89a3ce0" (UID: "4dd9b451-9f5e-4822-b340-7557a89a3ce0"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.106704 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4dd9b451-9f5e-4822-b340-7557a89a3ce0-kube-api-access-qkmsn" (OuterVolumeSpecName: "kube-api-access-qkmsn") pod "4dd9b451-9f5e-4822-b340-7557a89a3ce0" (UID: "4dd9b451-9f5e-4822-b340-7557a89a3ce0"). InnerVolumeSpecName "kube-api-access-qkmsn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.109228 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4dd9b451-9f5e-4822-b340-7557a89a3ce0-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "4dd9b451-9f5e-4822-b340-7557a89a3ce0" (UID: "4dd9b451-9f5e-4822-b340-7557a89a3ce0"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.118569 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4dd9b451-9f5e-4822-b340-7557a89a3ce0-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "4dd9b451-9f5e-4822-b340-7557a89a3ce0" (UID: "4dd9b451-9f5e-4822-b340-7557a89a3ce0"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.128293 5104 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-dfzjm"] Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.129247 5104 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4dd9b451-9f5e-4822-b340-7557a89a3ce0" containerName="kube-rbac-proxy-node" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.129273 5104 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dd9b451-9f5e-4822-b340-7557a89a3ce0" containerName="kube-rbac-proxy-node" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.129293 5104 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4dd9b451-9f5e-4822-b340-7557a89a3ce0" containerName="kube-rbac-proxy-ovn-metrics" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.129302 5104 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dd9b451-9f5e-4822-b340-7557a89a3ce0" containerName="kube-rbac-proxy-ovn-metrics" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.129314 5104 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4dd9b451-9f5e-4822-b340-7557a89a3ce0" containerName="nbdb" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.129320 5104 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dd9b451-9f5e-4822-b340-7557a89a3ce0" containerName="nbdb" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.129332 5104 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4dd9b451-9f5e-4822-b340-7557a89a3ce0" containerName="northd" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.129339 5104 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dd9b451-9f5e-4822-b340-7557a89a3ce0" containerName="northd" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.129346 5104 cpu_manager.go:401] "RemoveStaleState: containerMap: removing 
container" podUID="4dd9b451-9f5e-4822-b340-7557a89a3ce0" containerName="ovn-controller" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.129352 5104 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dd9b451-9f5e-4822-b340-7557a89a3ce0" containerName="ovn-controller" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.129364 5104 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4dd9b451-9f5e-4822-b340-7557a89a3ce0" containerName="kubecfg-setup" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.129371 5104 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dd9b451-9f5e-4822-b340-7557a89a3ce0" containerName="kubecfg-setup" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.129385 5104 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4dd9b451-9f5e-4822-b340-7557a89a3ce0" containerName="ovnkube-controller" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.129391 5104 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dd9b451-9f5e-4822-b340-7557a89a3ce0" containerName="ovnkube-controller" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.129400 5104 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4dd9b451-9f5e-4822-b340-7557a89a3ce0" containerName="ovn-acl-logging" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.129407 5104 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dd9b451-9f5e-4822-b340-7557a89a3ce0" containerName="ovn-acl-logging" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.129416 5104 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4dd9b451-9f5e-4822-b340-7557a89a3ce0" containerName="sbdb" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.129422 5104 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dd9b451-9f5e-4822-b340-7557a89a3ce0" containerName="sbdb" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.129522 5104 memory_manager.go:356] 
"RemoveStaleState removing state" podUID="4dd9b451-9f5e-4822-b340-7557a89a3ce0" containerName="ovn-controller" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.129537 5104 memory_manager.go:356] "RemoveStaleState removing state" podUID="4dd9b451-9f5e-4822-b340-7557a89a3ce0" containerName="kube-rbac-proxy-node" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.129547 5104 memory_manager.go:356] "RemoveStaleState removing state" podUID="4dd9b451-9f5e-4822-b340-7557a89a3ce0" containerName="sbdb" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.129556 5104 memory_manager.go:356] "RemoveStaleState removing state" podUID="4dd9b451-9f5e-4822-b340-7557a89a3ce0" containerName="ovn-acl-logging" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.129564 5104 memory_manager.go:356] "RemoveStaleState removing state" podUID="4dd9b451-9f5e-4822-b340-7557a89a3ce0" containerName="ovnkube-controller" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.129573 5104 memory_manager.go:356] "RemoveStaleState removing state" podUID="4dd9b451-9f5e-4822-b340-7557a89a3ce0" containerName="kube-rbac-proxy-ovn-metrics" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.129579 5104 memory_manager.go:356] "RemoveStaleState removing state" podUID="4dd9b451-9f5e-4822-b340-7557a89a3ce0" containerName="northd" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.129587 5104 memory_manager.go:356] "RemoveStaleState removing state" podUID="4dd9b451-9f5e-4822-b340-7557a89a3ce0" containerName="nbdb" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.135352 5104 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-dfzjm" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.204185 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/36849c5d-68a1-48dd-82b4-102cb89557e3-etc-openvswitch\") pod \"ovnkube-node-dfzjm\" (UID: \"36849c5d-68a1-48dd-82b4-102cb89557e3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfzjm" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.204302 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4f7r\" (UniqueName: \"kubernetes.io/projected/36849c5d-68a1-48dd-82b4-102cb89557e3-kube-api-access-t4f7r\") pod \"ovnkube-node-dfzjm\" (UID: \"36849c5d-68a1-48dd-82b4-102cb89557e3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfzjm" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.204375 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/36849c5d-68a1-48dd-82b4-102cb89557e3-host-cni-bin\") pod \"ovnkube-node-dfzjm\" (UID: \"36849c5d-68a1-48dd-82b4-102cb89557e3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfzjm" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.204403 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/36849c5d-68a1-48dd-82b4-102cb89557e3-ovn-node-metrics-cert\") pod \"ovnkube-node-dfzjm\" (UID: \"36849c5d-68a1-48dd-82b4-102cb89557e3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfzjm" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.204448 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/36849c5d-68a1-48dd-82b4-102cb89557e3-var-lib-openvswitch\") pod 
\"ovnkube-node-dfzjm\" (UID: \"36849c5d-68a1-48dd-82b4-102cb89557e3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfzjm" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.204531 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/36849c5d-68a1-48dd-82b4-102cb89557e3-systemd-units\") pod \"ovnkube-node-dfzjm\" (UID: \"36849c5d-68a1-48dd-82b4-102cb89557e3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfzjm" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.204556 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/36849c5d-68a1-48dd-82b4-102cb89557e3-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-dfzjm\" (UID: \"36849c5d-68a1-48dd-82b4-102cb89557e3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfzjm" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.204577 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/36849c5d-68a1-48dd-82b4-102cb89557e3-run-openvswitch\") pod \"ovnkube-node-dfzjm\" (UID: \"36849c5d-68a1-48dd-82b4-102cb89557e3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfzjm" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.204601 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/36849c5d-68a1-48dd-82b4-102cb89557e3-host-cni-netd\") pod \"ovnkube-node-dfzjm\" (UID: \"36849c5d-68a1-48dd-82b4-102cb89557e3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfzjm" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.204649 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/36849c5d-68a1-48dd-82b4-102cb89557e3-ovnkube-config\") pod \"ovnkube-node-dfzjm\" (UID: \"36849c5d-68a1-48dd-82b4-102cb89557e3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfzjm" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.204678 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/36849c5d-68a1-48dd-82b4-102cb89557e3-ovnkube-script-lib\") pod \"ovnkube-node-dfzjm\" (UID: \"36849c5d-68a1-48dd-82b4-102cb89557e3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfzjm" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.204698 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/36849c5d-68a1-48dd-82b4-102cb89557e3-env-overrides\") pod \"ovnkube-node-dfzjm\" (UID: \"36849c5d-68a1-48dd-82b4-102cb89557e3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfzjm" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.204771 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/36849c5d-68a1-48dd-82b4-102cb89557e3-run-ovn\") pod \"ovnkube-node-dfzjm\" (UID: \"36849c5d-68a1-48dd-82b4-102cb89557e3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfzjm" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.204826 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/36849c5d-68a1-48dd-82b4-102cb89557e3-log-socket\") pod \"ovnkube-node-dfzjm\" (UID: \"36849c5d-68a1-48dd-82b4-102cb89557e3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfzjm" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.204875 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/36849c5d-68a1-48dd-82b4-102cb89557e3-host-kubelet\") pod \"ovnkube-node-dfzjm\" (UID: \"36849c5d-68a1-48dd-82b4-102cb89557e3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfzjm" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.205006 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/36849c5d-68a1-48dd-82b4-102cb89557e3-host-slash\") pod \"ovnkube-node-dfzjm\" (UID: \"36849c5d-68a1-48dd-82b4-102cb89557e3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfzjm" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.205036 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/36849c5d-68a1-48dd-82b4-102cb89557e3-run-systemd\") pod \"ovnkube-node-dfzjm\" (UID: \"36849c5d-68a1-48dd-82b4-102cb89557e3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfzjm" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.205069 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/36849c5d-68a1-48dd-82b4-102cb89557e3-host-run-netns\") pod \"ovnkube-node-dfzjm\" (UID: \"36849c5d-68a1-48dd-82b4-102cb89557e3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfzjm" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.205142 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/36849c5d-68a1-48dd-82b4-102cb89557e3-node-log\") pod \"ovnkube-node-dfzjm\" (UID: \"36849c5d-68a1-48dd-82b4-102cb89557e3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfzjm" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.205172 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/36849c5d-68a1-48dd-82b4-102cb89557e3-host-run-ovn-kubernetes\") pod \"ovnkube-node-dfzjm\" (UID: \"36849c5d-68a1-48dd-82b4-102cb89557e3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfzjm" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.205273 5104 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qkmsn\" (UniqueName: \"kubernetes.io/projected/4dd9b451-9f5e-4822-b340-7557a89a3ce0-kube-api-access-qkmsn\") on node \"crc\" DevicePath \"\"" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.205295 5104 reconciler_common.go:299] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4dd9b451-9f5e-4822-b340-7557a89a3ce0-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.205308 5104 reconciler_common.go:299] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4dd9b451-9f5e-4822-b340-7557a89a3ce0-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.205323 5104 reconciler_common.go:299] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4dd9b451-9f5e-4822-b340-7557a89a3ce0-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.306544 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/36849c5d-68a1-48dd-82b4-102cb89557e3-ovnkube-script-lib\") pod \"ovnkube-node-dfzjm\" (UID: \"36849c5d-68a1-48dd-82b4-102cb89557e3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfzjm" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.306600 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/36849c5d-68a1-48dd-82b4-102cb89557e3-env-overrides\") pod \"ovnkube-node-dfzjm\" (UID: 
\"36849c5d-68a1-48dd-82b4-102cb89557e3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfzjm" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.306639 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/36849c5d-68a1-48dd-82b4-102cb89557e3-run-ovn\") pod \"ovnkube-node-dfzjm\" (UID: \"36849c5d-68a1-48dd-82b4-102cb89557e3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfzjm" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.306683 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/36849c5d-68a1-48dd-82b4-102cb89557e3-log-socket\") pod \"ovnkube-node-dfzjm\" (UID: \"36849c5d-68a1-48dd-82b4-102cb89557e3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfzjm" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.306725 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/36849c5d-68a1-48dd-82b4-102cb89557e3-host-kubelet\") pod \"ovnkube-node-dfzjm\" (UID: \"36849c5d-68a1-48dd-82b4-102cb89557e3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfzjm" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.306764 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/36849c5d-68a1-48dd-82b4-102cb89557e3-host-slash\") pod \"ovnkube-node-dfzjm\" (UID: \"36849c5d-68a1-48dd-82b4-102cb89557e3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfzjm" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.306795 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/36849c5d-68a1-48dd-82b4-102cb89557e3-run-systemd\") pod \"ovnkube-node-dfzjm\" (UID: \"36849c5d-68a1-48dd-82b4-102cb89557e3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfzjm" Jan 30 00:20:58 crc 
kubenswrapper[5104]: I0130 00:20:58.307107 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/36849c5d-68a1-48dd-82b4-102cb89557e3-run-ovn\") pod \"ovnkube-node-dfzjm\" (UID: \"36849c5d-68a1-48dd-82b4-102cb89557e3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfzjm" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.307143 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/36849c5d-68a1-48dd-82b4-102cb89557e3-host-slash\") pod \"ovnkube-node-dfzjm\" (UID: \"36849c5d-68a1-48dd-82b4-102cb89557e3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfzjm" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.307264 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/36849c5d-68a1-48dd-82b4-102cb89557e3-host-run-netns\") pod \"ovnkube-node-dfzjm\" (UID: \"36849c5d-68a1-48dd-82b4-102cb89557e3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfzjm" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.307314 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/36849c5d-68a1-48dd-82b4-102cb89557e3-node-log\") pod \"ovnkube-node-dfzjm\" (UID: \"36849c5d-68a1-48dd-82b4-102cb89557e3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfzjm" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.307349 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/36849c5d-68a1-48dd-82b4-102cb89557e3-host-run-ovn-kubernetes\") pod \"ovnkube-node-dfzjm\" (UID: \"36849c5d-68a1-48dd-82b4-102cb89557e3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfzjm" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.307367 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for 
volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/36849c5d-68a1-48dd-82b4-102cb89557e3-host-run-netns\") pod \"ovnkube-node-dfzjm\" (UID: \"36849c5d-68a1-48dd-82b4-102cb89557e3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfzjm" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.307441 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/36849c5d-68a1-48dd-82b4-102cb89557e3-etc-openvswitch\") pod \"ovnkube-node-dfzjm\" (UID: \"36849c5d-68a1-48dd-82b4-102cb89557e3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfzjm" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.307484 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/36849c5d-68a1-48dd-82b4-102cb89557e3-host-run-ovn-kubernetes\") pod \"ovnkube-node-dfzjm\" (UID: \"36849c5d-68a1-48dd-82b4-102cb89557e3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfzjm" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.307354 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/36849c5d-68a1-48dd-82b4-102cb89557e3-node-log\") pod \"ovnkube-node-dfzjm\" (UID: \"36849c5d-68a1-48dd-82b4-102cb89557e3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfzjm" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.307490 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-t4f7r\" (UniqueName: \"kubernetes.io/projected/36849c5d-68a1-48dd-82b4-102cb89557e3-kube-api-access-t4f7r\") pod \"ovnkube-node-dfzjm\" (UID: \"36849c5d-68a1-48dd-82b4-102cb89557e3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfzjm" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.307548 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/36849c5d-68a1-48dd-82b4-102cb89557e3-etc-openvswitch\") pod \"ovnkube-node-dfzjm\" (UID: \"36849c5d-68a1-48dd-82b4-102cb89557e3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfzjm" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.307554 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/36849c5d-68a1-48dd-82b4-102cb89557e3-host-cni-bin\") pod \"ovnkube-node-dfzjm\" (UID: \"36849c5d-68a1-48dd-82b4-102cb89557e3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfzjm" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.307584 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/36849c5d-68a1-48dd-82b4-102cb89557e3-ovn-node-metrics-cert\") pod \"ovnkube-node-dfzjm\" (UID: \"36849c5d-68a1-48dd-82b4-102cb89557e3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfzjm" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.307614 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/36849c5d-68a1-48dd-82b4-102cb89557e3-var-lib-openvswitch\") pod \"ovnkube-node-dfzjm\" (UID: \"36849c5d-68a1-48dd-82b4-102cb89557e3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfzjm" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.307651 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/36849c5d-68a1-48dd-82b4-102cb89557e3-systemd-units\") pod \"ovnkube-node-dfzjm\" (UID: \"36849c5d-68a1-48dd-82b4-102cb89557e3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfzjm" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.307682 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/36849c5d-68a1-48dd-82b4-102cb89557e3-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-dfzjm\" (UID: \"36849c5d-68a1-48dd-82b4-102cb89557e3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfzjm" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.307721 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/36849c5d-68a1-48dd-82b4-102cb89557e3-run-openvswitch\") pod \"ovnkube-node-dfzjm\" (UID: \"36849c5d-68a1-48dd-82b4-102cb89557e3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfzjm" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.307775 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/36849c5d-68a1-48dd-82b4-102cb89557e3-host-cni-netd\") pod \"ovnkube-node-dfzjm\" (UID: \"36849c5d-68a1-48dd-82b4-102cb89557e3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfzjm" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.307827 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/36849c5d-68a1-48dd-82b4-102cb89557e3-ovnkube-config\") pod \"ovnkube-node-dfzjm\" (UID: \"36849c5d-68a1-48dd-82b4-102cb89557e3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfzjm" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.307932 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/36849c5d-68a1-48dd-82b4-102cb89557e3-host-kubelet\") pod \"ovnkube-node-dfzjm\" (UID: \"36849c5d-68a1-48dd-82b4-102cb89557e3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfzjm" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.308017 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/36849c5d-68a1-48dd-82b4-102cb89557e3-var-lib-openvswitch\") pod \"ovnkube-node-dfzjm\" (UID: \"36849c5d-68a1-48dd-82b4-102cb89557e3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfzjm" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.308036 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/36849c5d-68a1-48dd-82b4-102cb89557e3-env-overrides\") pod \"ovnkube-node-dfzjm\" (UID: \"36849c5d-68a1-48dd-82b4-102cb89557e3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfzjm" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.308086 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/36849c5d-68a1-48dd-82b4-102cb89557e3-host-cni-netd\") pod \"ovnkube-node-dfzjm\" (UID: \"36849c5d-68a1-48dd-82b4-102cb89557e3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfzjm" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.308063 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/36849c5d-68a1-48dd-82b4-102cb89557e3-run-openvswitch\") pod \"ovnkube-node-dfzjm\" (UID: \"36849c5d-68a1-48dd-82b4-102cb89557e3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfzjm" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.308119 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/36849c5d-68a1-48dd-82b4-102cb89557e3-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-dfzjm\" (UID: \"36849c5d-68a1-48dd-82b4-102cb89557e3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfzjm" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.308118 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/36849c5d-68a1-48dd-82b4-102cb89557e3-ovnkube-script-lib\") pod \"ovnkube-node-dfzjm\" (UID: \"36849c5d-68a1-48dd-82b4-102cb89557e3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfzjm" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.308196 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/36849c5d-68a1-48dd-82b4-102cb89557e3-systemd-units\") pod \"ovnkube-node-dfzjm\" (UID: \"36849c5d-68a1-48dd-82b4-102cb89557e3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfzjm" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.308219 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/36849c5d-68a1-48dd-82b4-102cb89557e3-host-cni-bin\") pod \"ovnkube-node-dfzjm\" (UID: \"36849c5d-68a1-48dd-82b4-102cb89557e3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfzjm" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.308226 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/36849c5d-68a1-48dd-82b4-102cb89557e3-run-systemd\") pod \"ovnkube-node-dfzjm\" (UID: \"36849c5d-68a1-48dd-82b4-102cb89557e3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfzjm" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.308641 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/36849c5d-68a1-48dd-82b4-102cb89557e3-ovnkube-config\") pod \"ovnkube-node-dfzjm\" (UID: \"36849c5d-68a1-48dd-82b4-102cb89557e3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfzjm" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.308795 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/36849c5d-68a1-48dd-82b4-102cb89557e3-log-socket\") pod \"ovnkube-node-dfzjm\" (UID: 
\"36849c5d-68a1-48dd-82b4-102cb89557e3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfzjm" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.316020 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/36849c5d-68a1-48dd-82b4-102cb89557e3-ovn-node-metrics-cert\") pod \"ovnkube-node-dfzjm\" (UID: \"36849c5d-68a1-48dd-82b4-102cb89557e3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfzjm" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.326841 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4f7r\" (UniqueName: \"kubernetes.io/projected/36849c5d-68a1-48dd-82b4-102cb89557e3-kube-api-access-t4f7r\") pod \"ovnkube-node-dfzjm\" (UID: \"36849c5d-68a1-48dd-82b4-102cb89557e3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dfzjm" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.455907 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-dfzjm" Jan 30 00:20:58 crc kubenswrapper[5104]: W0130 00:20:58.470701 5104 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod36849c5d_68a1_48dd_82b4_102cb89557e3.slice/crio-45ce159d72283617066be95c65b316a2384ea2d338dc5d75f0e3d75faa95a310 WatchSource:0}: Error finding container 45ce159d72283617066be95c65b316a2384ea2d338dc5d75f0e3d75faa95a310: Status 404 returned error can't find the container with id 45ce159d72283617066be95c65b316a2384ea2d338dc5d75f0e3d75faa95a310 Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.531350 5104 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f8c53-ccbf-4f3c-a811-4d64d678e217" path="/var/lib/kubelet/pods/925f8c53-ccbf-4f3c-a811-4d64d678e217/volumes" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.815788 5104 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dr5dp_4dd9b451-9f5e-4822-b340-7557a89a3ce0/ovn-acl-logging/0.log" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.816477 5104 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dr5dp_4dd9b451-9f5e-4822-b340-7557a89a3ce0/ovn-controller/0.log" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.817091 5104 generic.go:358] "Generic (PLEG): container finished" podID="4dd9b451-9f5e-4822-b340-7557a89a3ce0" containerID="bbff92e295524c14ff63f4ec0ed873547af60dd65283e33a4cadf5ad7004edef" exitCode=0 Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.817143 5104 generic.go:358] "Generic (PLEG): container finished" podID="4dd9b451-9f5e-4822-b340-7557a89a3ce0" containerID="40d298f83de0c723a4a838843795e35036effc7bfbb77beb389a48bdc25f8524" exitCode=0 Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.817163 5104 generic.go:358] "Generic (PLEG): container finished" podID="4dd9b451-9f5e-4822-b340-7557a89a3ce0" containerID="5f894851ec70fd49996a95dc6ef315eed6b5ab4b80619be9c78b9bba34ea1f86" exitCode=0 Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.817181 5104 generic.go:358] "Generic (PLEG): container finished" podID="4dd9b451-9f5e-4822-b340-7557a89a3ce0" containerID="a2094fbdb67d319df30745be54abcc7fd896a74987d488212e62dd2591fc4547" exitCode=0 Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.817197 5104 generic.go:358] "Generic (PLEG): container finished" podID="4dd9b451-9f5e-4822-b340-7557a89a3ce0" containerID="200dcc3840f55cbd23f7b942c041eac6fbc6806b7f71b56e4670f5259d00c70b" exitCode=0 Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.817215 5104 generic.go:358] "Generic (PLEG): container finished" podID="4dd9b451-9f5e-4822-b340-7557a89a3ce0" containerID="fcf28f348ed83bc59a65151c45ade27274d1c84f9d8088daf406794c698be505" exitCode=0 Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.817232 5104 generic.go:358] "Generic (PLEG): 
container finished" podID="4dd9b451-9f5e-4822-b340-7557a89a3ce0" containerID="7f84773ad1561f2468a7b9e564afe4da5466479c27455daffc938f275cdd8759" exitCode=143 Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.817249 5104 generic.go:358] "Generic (PLEG): container finished" podID="4dd9b451-9f5e-4822-b340-7557a89a3ce0" containerID="00bbd3826abc367fb2f43d9978de5b27a990cf044f166f1cfa8c32768845c5b9" exitCode=143 Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.817627 5104 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-dr5dp" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.819019 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dr5dp" event={"ID":"4dd9b451-9f5e-4822-b340-7557a89a3ce0","Type":"ContainerDied","Data":"bbff92e295524c14ff63f4ec0ed873547af60dd65283e33a4cadf5ad7004edef"} Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.819067 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dr5dp" event={"ID":"4dd9b451-9f5e-4822-b340-7557a89a3ce0","Type":"ContainerDied","Data":"40d298f83de0c723a4a838843795e35036effc7bfbb77beb389a48bdc25f8524"} Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.819090 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dr5dp" event={"ID":"4dd9b451-9f5e-4822-b340-7557a89a3ce0","Type":"ContainerDied","Data":"5f894851ec70fd49996a95dc6ef315eed6b5ab4b80619be9c78b9bba34ea1f86"} Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.819109 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dr5dp" event={"ID":"4dd9b451-9f5e-4822-b340-7557a89a3ce0","Type":"ContainerDied","Data":"a2094fbdb67d319df30745be54abcc7fd896a74987d488212e62dd2591fc4547"} Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.819129 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-dr5dp" event={"ID":"4dd9b451-9f5e-4822-b340-7557a89a3ce0","Type":"ContainerDied","Data":"200dcc3840f55cbd23f7b942c041eac6fbc6806b7f71b56e4670f5259d00c70b"} Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.819144 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dr5dp" event={"ID":"4dd9b451-9f5e-4822-b340-7557a89a3ce0","Type":"ContainerDied","Data":"fcf28f348ed83bc59a65151c45ade27274d1c84f9d8088daf406794c698be505"} Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.819159 5104 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7f84773ad1561f2468a7b9e564afe4da5466479c27455daffc938f275cdd8759"} Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.819173 5104 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"00bbd3826abc367fb2f43d9978de5b27a990cf044f166f1cfa8c32768845c5b9"} Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.819181 5104 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d7f812057015c58485935e00c46076e7de69cc3dc4225f385b58a501c508194e"} Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.819193 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dr5dp" event={"ID":"4dd9b451-9f5e-4822-b340-7557a89a3ce0","Type":"ContainerDied","Data":"7f84773ad1561f2468a7b9e564afe4da5466479c27455daffc938f275cdd8759"} Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.819192 5104 scope.go:117] "RemoveContainer" containerID="bbff92e295524c14ff63f4ec0ed873547af60dd65283e33a4cadf5ad7004edef" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.819205 5104 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"bbff92e295524c14ff63f4ec0ed873547af60dd65283e33a4cadf5ad7004edef"} Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.819365 5104 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"40d298f83de0c723a4a838843795e35036effc7bfbb77beb389a48bdc25f8524"} Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.819383 5104 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5f894851ec70fd49996a95dc6ef315eed6b5ab4b80619be9c78b9bba34ea1f86"} Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.819391 5104 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a2094fbdb67d319df30745be54abcc7fd896a74987d488212e62dd2591fc4547"} Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.819398 5104 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"200dcc3840f55cbd23f7b942c041eac6fbc6806b7f71b56e4670f5259d00c70b"} Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.819415 5104 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fcf28f348ed83bc59a65151c45ade27274d1c84f9d8088daf406794c698be505"} Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.819422 5104 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7f84773ad1561f2468a7b9e564afe4da5466479c27455daffc938f275cdd8759"} Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.819428 5104 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"00bbd3826abc367fb2f43d9978de5b27a990cf044f166f1cfa8c32768845c5b9"} Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.819434 5104 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"d7f812057015c58485935e00c46076e7de69cc3dc4225f385b58a501c508194e"} Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.819457 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dr5dp" event={"ID":"4dd9b451-9f5e-4822-b340-7557a89a3ce0","Type":"ContainerDied","Data":"00bbd3826abc367fb2f43d9978de5b27a990cf044f166f1cfa8c32768845c5b9"} Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.819486 5104 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bbff92e295524c14ff63f4ec0ed873547af60dd65283e33a4cadf5ad7004edef"} Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.819497 5104 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"40d298f83de0c723a4a838843795e35036effc7bfbb77beb389a48bdc25f8524"} Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.819503 5104 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5f894851ec70fd49996a95dc6ef315eed6b5ab4b80619be9c78b9bba34ea1f86"} Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.819511 5104 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a2094fbdb67d319df30745be54abcc7fd896a74987d488212e62dd2591fc4547"} Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.819517 5104 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"200dcc3840f55cbd23f7b942c041eac6fbc6806b7f71b56e4670f5259d00c70b"} Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.819523 5104 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fcf28f348ed83bc59a65151c45ade27274d1c84f9d8088daf406794c698be505"} Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.819529 5104 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7f84773ad1561f2468a7b9e564afe4da5466479c27455daffc938f275cdd8759"} Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.819534 5104 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"00bbd3826abc367fb2f43d9978de5b27a990cf044f166f1cfa8c32768845c5b9"} Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.819540 5104 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d7f812057015c58485935e00c46076e7de69cc3dc4225f385b58a501c508194e"} Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.819549 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dr5dp" event={"ID":"4dd9b451-9f5e-4822-b340-7557a89a3ce0","Type":"ContainerDied","Data":"5baac66e9d4e1ec5572319df2731c814892b2924f32a077f78cd3f4ac1cc77f7"} Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.819559 5104 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bbff92e295524c14ff63f4ec0ed873547af60dd65283e33a4cadf5ad7004edef"} Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.819565 5104 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"40d298f83de0c723a4a838843795e35036effc7bfbb77beb389a48bdc25f8524"} Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.819570 5104 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5f894851ec70fd49996a95dc6ef315eed6b5ab4b80619be9c78b9bba34ea1f86"} Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.819576 5104 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a2094fbdb67d319df30745be54abcc7fd896a74987d488212e62dd2591fc4547"} Jan 30 
00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.819582 5104 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"200dcc3840f55cbd23f7b942c041eac6fbc6806b7f71b56e4670f5259d00c70b"} Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.819587 5104 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fcf28f348ed83bc59a65151c45ade27274d1c84f9d8088daf406794c698be505"} Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.819593 5104 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7f84773ad1561f2468a7b9e564afe4da5466479c27455daffc938f275cdd8759"} Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.819598 5104 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"00bbd3826abc367fb2f43d9978de5b27a990cf044f166f1cfa8c32768845c5b9"} Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.819604 5104 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d7f812057015c58485935e00c46076e7de69cc3dc4225f385b58a501c508194e"} Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.822738 5104 generic.go:358] "Generic (PLEG): container finished" podID="36849c5d-68a1-48dd-82b4-102cb89557e3" containerID="606a41fb1ef48877b6913fd82d8a116e5138ddda646eed3b4228f6b5d6374c19" exitCode=0 Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.822940 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dfzjm" event={"ID":"36849c5d-68a1-48dd-82b4-102cb89557e3","Type":"ContainerDied","Data":"606a41fb1ef48877b6913fd82d8a116e5138ddda646eed3b4228f6b5d6374c19"} Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.823019 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dfzjm" 
event={"ID":"36849c5d-68a1-48dd-82b4-102cb89557e3","Type":"ContainerStarted","Data":"45ce159d72283617066be95c65b316a2384ea2d338dc5d75f0e3d75faa95a310"} Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.825191 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-5mjmx" event={"ID":"f47b7290-ef21-4ae7-ac34-041e6e7a2d89","Type":"ContainerStarted","Data":"e2a79f42e481bb501dc9625530351f857786e39dbe668f2356e86fd11e86d65b"} Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.825234 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-5mjmx" event={"ID":"f47b7290-ef21-4ae7-ac34-041e6e7a2d89","Type":"ContainerStarted","Data":"bd72146e1a344624fccfbfa82077216847ce0fad9b4db5c0daaf4fdba444e61f"} Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.825247 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-5mjmx" event={"ID":"f47b7290-ef21-4ae7-ac34-041e6e7a2d89","Type":"ContainerStarted","Data":"736ab0a02295bcbbcdab3d6a6e78a1a34c40e2ae4a1d5e87d454969e9d4a86a3"} Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.828428 5104 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bk79c_3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f/kube-multus/0.log" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.828489 5104 generic.go:358] "Generic (PLEG): container finished" podID="3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f" containerID="33a5e4f0b9727f64dc777e52dfe8a3658603775c843c9fdba0764b55e730ba77" exitCode=2 Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.828644 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bk79c" event={"ID":"3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f","Type":"ContainerDied","Data":"33a5e4f0b9727f64dc777e52dfe8a3658603775c843c9fdba0764b55e730ba77"} Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 
00:20:58.829227 5104 scope.go:117] "RemoveContainer" containerID="33a5e4f0b9727f64dc777e52dfe8a3658603775c843c9fdba0764b55e730ba77" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.840571 5104 scope.go:117] "RemoveContainer" containerID="40d298f83de0c723a4a838843795e35036effc7bfbb77beb389a48bdc25f8524" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.847657 5104 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-dr5dp"] Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.857611 5104 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-dr5dp"] Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.871440 5104 scope.go:117] "RemoveContainer" containerID="5f894851ec70fd49996a95dc6ef315eed6b5ab4b80619be9c78b9bba34ea1f86" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.887277 5104 scope.go:117] "RemoveContainer" containerID="a2094fbdb67d319df30745be54abcc7fd896a74987d488212e62dd2591fc4547" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.901694 5104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-5mjmx" podStartSLOduration=1.901666774 podStartE2EDuration="1.901666774s" podCreationTimestamp="2026-01-30 00:20:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:20:58.861273687 +0000 UTC m=+639.593612916" watchObservedRunningTime="2026-01-30 00:20:58.901666774 +0000 UTC m=+639.634006013" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.934906 5104 scope.go:117] "RemoveContainer" containerID="200dcc3840f55cbd23f7b942c041eac6fbc6806b7f71b56e4670f5259d00c70b" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.951100 5104 scope.go:117] "RemoveContainer" containerID="fcf28f348ed83bc59a65151c45ade27274d1c84f9d8088daf406794c698be505" Jan 30 00:20:58 crc kubenswrapper[5104]: 
I0130 00:20:58.965652 5104 scope.go:117] "RemoveContainer" containerID="7f84773ad1561f2468a7b9e564afe4da5466479c27455daffc938f275cdd8759" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.977913 5104 scope.go:117] "RemoveContainer" containerID="00bbd3826abc367fb2f43d9978de5b27a990cf044f166f1cfa8c32768845c5b9" Jan 30 00:20:58 crc kubenswrapper[5104]: I0130 00:20:58.994511 5104 scope.go:117] "RemoveContainer" containerID="d7f812057015c58485935e00c46076e7de69cc3dc4225f385b58a501c508194e" Jan 30 00:20:59 crc kubenswrapper[5104]: I0130 00:20:59.013020 5104 scope.go:117] "RemoveContainer" containerID="bbff92e295524c14ff63f4ec0ed873547af60dd65283e33a4cadf5ad7004edef" Jan 30 00:20:59 crc kubenswrapper[5104]: E0130 00:20:59.017638 5104 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bbff92e295524c14ff63f4ec0ed873547af60dd65283e33a4cadf5ad7004edef\": container with ID starting with bbff92e295524c14ff63f4ec0ed873547af60dd65283e33a4cadf5ad7004edef not found: ID does not exist" containerID="bbff92e295524c14ff63f4ec0ed873547af60dd65283e33a4cadf5ad7004edef" Jan 30 00:20:59 crc kubenswrapper[5104]: I0130 00:20:59.017677 5104 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbff92e295524c14ff63f4ec0ed873547af60dd65283e33a4cadf5ad7004edef"} err="failed to get container status \"bbff92e295524c14ff63f4ec0ed873547af60dd65283e33a4cadf5ad7004edef\": rpc error: code = NotFound desc = could not find container \"bbff92e295524c14ff63f4ec0ed873547af60dd65283e33a4cadf5ad7004edef\": container with ID starting with bbff92e295524c14ff63f4ec0ed873547af60dd65283e33a4cadf5ad7004edef not found: ID does not exist" Jan 30 00:20:59 crc kubenswrapper[5104]: I0130 00:20:59.017705 5104 scope.go:117] "RemoveContainer" containerID="40d298f83de0c723a4a838843795e35036effc7bfbb77beb389a48bdc25f8524" Jan 30 00:20:59 crc kubenswrapper[5104]: E0130 00:20:59.017992 5104 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40d298f83de0c723a4a838843795e35036effc7bfbb77beb389a48bdc25f8524\": container with ID starting with 40d298f83de0c723a4a838843795e35036effc7bfbb77beb389a48bdc25f8524 not found: ID does not exist" containerID="40d298f83de0c723a4a838843795e35036effc7bfbb77beb389a48bdc25f8524" Jan 30 00:20:59 crc kubenswrapper[5104]: I0130 00:20:59.018017 5104 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40d298f83de0c723a4a838843795e35036effc7bfbb77beb389a48bdc25f8524"} err="failed to get container status \"40d298f83de0c723a4a838843795e35036effc7bfbb77beb389a48bdc25f8524\": rpc error: code = NotFound desc = could not find container \"40d298f83de0c723a4a838843795e35036effc7bfbb77beb389a48bdc25f8524\": container with ID starting with 40d298f83de0c723a4a838843795e35036effc7bfbb77beb389a48bdc25f8524 not found: ID does not exist" Jan 30 00:20:59 crc kubenswrapper[5104]: I0130 00:20:59.018035 5104 scope.go:117] "RemoveContainer" containerID="5f894851ec70fd49996a95dc6ef315eed6b5ab4b80619be9c78b9bba34ea1f86" Jan 30 00:20:59 crc kubenswrapper[5104]: E0130 00:20:59.018289 5104 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f894851ec70fd49996a95dc6ef315eed6b5ab4b80619be9c78b9bba34ea1f86\": container with ID starting with 5f894851ec70fd49996a95dc6ef315eed6b5ab4b80619be9c78b9bba34ea1f86 not found: ID does not exist" containerID="5f894851ec70fd49996a95dc6ef315eed6b5ab4b80619be9c78b9bba34ea1f86" Jan 30 00:20:59 crc kubenswrapper[5104]: I0130 00:20:59.018313 5104 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f894851ec70fd49996a95dc6ef315eed6b5ab4b80619be9c78b9bba34ea1f86"} err="failed to get container status \"5f894851ec70fd49996a95dc6ef315eed6b5ab4b80619be9c78b9bba34ea1f86\": rpc error: code = NotFound desc = could 
not find container \"5f894851ec70fd49996a95dc6ef315eed6b5ab4b80619be9c78b9bba34ea1f86\": container with ID starting with 5f894851ec70fd49996a95dc6ef315eed6b5ab4b80619be9c78b9bba34ea1f86 not found: ID does not exist" Jan 30 00:20:59 crc kubenswrapper[5104]: I0130 00:20:59.018330 5104 scope.go:117] "RemoveContainer" containerID="a2094fbdb67d319df30745be54abcc7fd896a74987d488212e62dd2591fc4547" Jan 30 00:20:59 crc kubenswrapper[5104]: E0130 00:20:59.018518 5104 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2094fbdb67d319df30745be54abcc7fd896a74987d488212e62dd2591fc4547\": container with ID starting with a2094fbdb67d319df30745be54abcc7fd896a74987d488212e62dd2591fc4547 not found: ID does not exist" containerID="a2094fbdb67d319df30745be54abcc7fd896a74987d488212e62dd2591fc4547" Jan 30 00:20:59 crc kubenswrapper[5104]: I0130 00:20:59.018539 5104 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2094fbdb67d319df30745be54abcc7fd896a74987d488212e62dd2591fc4547"} err="failed to get container status \"a2094fbdb67d319df30745be54abcc7fd896a74987d488212e62dd2591fc4547\": rpc error: code = NotFound desc = could not find container \"a2094fbdb67d319df30745be54abcc7fd896a74987d488212e62dd2591fc4547\": container with ID starting with a2094fbdb67d319df30745be54abcc7fd896a74987d488212e62dd2591fc4547 not found: ID does not exist" Jan 30 00:20:59 crc kubenswrapper[5104]: I0130 00:20:59.018554 5104 scope.go:117] "RemoveContainer" containerID="200dcc3840f55cbd23f7b942c041eac6fbc6806b7f71b56e4670f5259d00c70b" Jan 30 00:20:59 crc kubenswrapper[5104]: E0130 00:20:59.018726 5104 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"200dcc3840f55cbd23f7b942c041eac6fbc6806b7f71b56e4670f5259d00c70b\": container with ID starting with 200dcc3840f55cbd23f7b942c041eac6fbc6806b7f71b56e4670f5259d00c70b not found: 
ID does not exist" containerID="200dcc3840f55cbd23f7b942c041eac6fbc6806b7f71b56e4670f5259d00c70b" Jan 30 00:20:59 crc kubenswrapper[5104]: I0130 00:20:59.018747 5104 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"200dcc3840f55cbd23f7b942c041eac6fbc6806b7f71b56e4670f5259d00c70b"} err="failed to get container status \"200dcc3840f55cbd23f7b942c041eac6fbc6806b7f71b56e4670f5259d00c70b\": rpc error: code = NotFound desc = could not find container \"200dcc3840f55cbd23f7b942c041eac6fbc6806b7f71b56e4670f5259d00c70b\": container with ID starting with 200dcc3840f55cbd23f7b942c041eac6fbc6806b7f71b56e4670f5259d00c70b not found: ID does not exist" Jan 30 00:20:59 crc kubenswrapper[5104]: I0130 00:20:59.018761 5104 scope.go:117] "RemoveContainer" containerID="fcf28f348ed83bc59a65151c45ade27274d1c84f9d8088daf406794c698be505" Jan 30 00:20:59 crc kubenswrapper[5104]: E0130 00:20:59.019057 5104 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fcf28f348ed83bc59a65151c45ade27274d1c84f9d8088daf406794c698be505\": container with ID starting with fcf28f348ed83bc59a65151c45ade27274d1c84f9d8088daf406794c698be505 not found: ID does not exist" containerID="fcf28f348ed83bc59a65151c45ade27274d1c84f9d8088daf406794c698be505" Jan 30 00:20:59 crc kubenswrapper[5104]: I0130 00:20:59.019077 5104 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fcf28f348ed83bc59a65151c45ade27274d1c84f9d8088daf406794c698be505"} err="failed to get container status \"fcf28f348ed83bc59a65151c45ade27274d1c84f9d8088daf406794c698be505\": rpc error: code = NotFound desc = could not find container \"fcf28f348ed83bc59a65151c45ade27274d1c84f9d8088daf406794c698be505\": container with ID starting with fcf28f348ed83bc59a65151c45ade27274d1c84f9d8088daf406794c698be505 not found: ID does not exist" Jan 30 00:20:59 crc kubenswrapper[5104]: I0130 00:20:59.019092 5104 
scope.go:117] "RemoveContainer" containerID="7f84773ad1561f2468a7b9e564afe4da5466479c27455daffc938f275cdd8759" Jan 30 00:20:59 crc kubenswrapper[5104]: E0130 00:20:59.019244 5104 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f84773ad1561f2468a7b9e564afe4da5466479c27455daffc938f275cdd8759\": container with ID starting with 7f84773ad1561f2468a7b9e564afe4da5466479c27455daffc938f275cdd8759 not found: ID does not exist" containerID="7f84773ad1561f2468a7b9e564afe4da5466479c27455daffc938f275cdd8759" Jan 30 00:20:59 crc kubenswrapper[5104]: I0130 00:20:59.019265 5104 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f84773ad1561f2468a7b9e564afe4da5466479c27455daffc938f275cdd8759"} err="failed to get container status \"7f84773ad1561f2468a7b9e564afe4da5466479c27455daffc938f275cdd8759\": rpc error: code = NotFound desc = could not find container \"7f84773ad1561f2468a7b9e564afe4da5466479c27455daffc938f275cdd8759\": container with ID starting with 7f84773ad1561f2468a7b9e564afe4da5466479c27455daffc938f275cdd8759 not found: ID does not exist" Jan 30 00:20:59 crc kubenswrapper[5104]: I0130 00:20:59.019283 5104 scope.go:117] "RemoveContainer" containerID="00bbd3826abc367fb2f43d9978de5b27a990cf044f166f1cfa8c32768845c5b9" Jan 30 00:20:59 crc kubenswrapper[5104]: E0130 00:20:59.019430 5104 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00bbd3826abc367fb2f43d9978de5b27a990cf044f166f1cfa8c32768845c5b9\": container with ID starting with 00bbd3826abc367fb2f43d9978de5b27a990cf044f166f1cfa8c32768845c5b9 not found: ID does not exist" containerID="00bbd3826abc367fb2f43d9978de5b27a990cf044f166f1cfa8c32768845c5b9" Jan 30 00:20:59 crc kubenswrapper[5104]: I0130 00:20:59.019450 5104 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"00bbd3826abc367fb2f43d9978de5b27a990cf044f166f1cfa8c32768845c5b9"} err="failed to get container status \"00bbd3826abc367fb2f43d9978de5b27a990cf044f166f1cfa8c32768845c5b9\": rpc error: code = NotFound desc = could not find container \"00bbd3826abc367fb2f43d9978de5b27a990cf044f166f1cfa8c32768845c5b9\": container with ID starting with 00bbd3826abc367fb2f43d9978de5b27a990cf044f166f1cfa8c32768845c5b9 not found: ID does not exist" Jan 30 00:20:59 crc kubenswrapper[5104]: I0130 00:20:59.019467 5104 scope.go:117] "RemoveContainer" containerID="d7f812057015c58485935e00c46076e7de69cc3dc4225f385b58a501c508194e" Jan 30 00:20:59 crc kubenswrapper[5104]: E0130 00:20:59.019873 5104 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7f812057015c58485935e00c46076e7de69cc3dc4225f385b58a501c508194e\": container with ID starting with d7f812057015c58485935e00c46076e7de69cc3dc4225f385b58a501c508194e not found: ID does not exist" containerID="d7f812057015c58485935e00c46076e7de69cc3dc4225f385b58a501c508194e" Jan 30 00:20:59 crc kubenswrapper[5104]: I0130 00:20:59.019896 5104 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7f812057015c58485935e00c46076e7de69cc3dc4225f385b58a501c508194e"} err="failed to get container status \"d7f812057015c58485935e00c46076e7de69cc3dc4225f385b58a501c508194e\": rpc error: code = NotFound desc = could not find container \"d7f812057015c58485935e00c46076e7de69cc3dc4225f385b58a501c508194e\": container with ID starting with d7f812057015c58485935e00c46076e7de69cc3dc4225f385b58a501c508194e not found: ID does not exist" Jan 30 00:20:59 crc kubenswrapper[5104]: I0130 00:20:59.019912 5104 scope.go:117] "RemoveContainer" containerID="bbff92e295524c14ff63f4ec0ed873547af60dd65283e33a4cadf5ad7004edef" Jan 30 00:20:59 crc kubenswrapper[5104]: I0130 00:20:59.020280 5104 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"bbff92e295524c14ff63f4ec0ed873547af60dd65283e33a4cadf5ad7004edef"} err="failed to get container status \"bbff92e295524c14ff63f4ec0ed873547af60dd65283e33a4cadf5ad7004edef\": rpc error: code = NotFound desc = could not find container \"bbff92e295524c14ff63f4ec0ed873547af60dd65283e33a4cadf5ad7004edef\": container with ID starting with bbff92e295524c14ff63f4ec0ed873547af60dd65283e33a4cadf5ad7004edef not found: ID does not exist" Jan 30 00:20:59 crc kubenswrapper[5104]: I0130 00:20:59.020300 5104 scope.go:117] "RemoveContainer" containerID="40d298f83de0c723a4a838843795e35036effc7bfbb77beb389a48bdc25f8524" Jan 30 00:20:59 crc kubenswrapper[5104]: I0130 00:20:59.020511 5104 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40d298f83de0c723a4a838843795e35036effc7bfbb77beb389a48bdc25f8524"} err="failed to get container status \"40d298f83de0c723a4a838843795e35036effc7bfbb77beb389a48bdc25f8524\": rpc error: code = NotFound desc = could not find container \"40d298f83de0c723a4a838843795e35036effc7bfbb77beb389a48bdc25f8524\": container with ID starting with 40d298f83de0c723a4a838843795e35036effc7bfbb77beb389a48bdc25f8524 not found: ID does not exist" Jan 30 00:20:59 crc kubenswrapper[5104]: I0130 00:20:59.020534 5104 scope.go:117] "RemoveContainer" containerID="5f894851ec70fd49996a95dc6ef315eed6b5ab4b80619be9c78b9bba34ea1f86" Jan 30 00:20:59 crc kubenswrapper[5104]: I0130 00:20:59.020809 5104 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f894851ec70fd49996a95dc6ef315eed6b5ab4b80619be9c78b9bba34ea1f86"} err="failed to get container status \"5f894851ec70fd49996a95dc6ef315eed6b5ab4b80619be9c78b9bba34ea1f86\": rpc error: code = NotFound desc = could not find container \"5f894851ec70fd49996a95dc6ef315eed6b5ab4b80619be9c78b9bba34ea1f86\": container with ID starting with 5f894851ec70fd49996a95dc6ef315eed6b5ab4b80619be9c78b9bba34ea1f86 not 
found: ID does not exist" Jan 30 00:20:59 crc kubenswrapper[5104]: I0130 00:20:59.020862 5104 scope.go:117] "RemoveContainer" containerID="a2094fbdb67d319df30745be54abcc7fd896a74987d488212e62dd2591fc4547" Jan 30 00:20:59 crc kubenswrapper[5104]: I0130 00:20:59.021086 5104 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2094fbdb67d319df30745be54abcc7fd896a74987d488212e62dd2591fc4547"} err="failed to get container status \"a2094fbdb67d319df30745be54abcc7fd896a74987d488212e62dd2591fc4547\": rpc error: code = NotFound desc = could not find container \"a2094fbdb67d319df30745be54abcc7fd896a74987d488212e62dd2591fc4547\": container with ID starting with a2094fbdb67d319df30745be54abcc7fd896a74987d488212e62dd2591fc4547 not found: ID does not exist" Jan 30 00:20:59 crc kubenswrapper[5104]: I0130 00:20:59.021165 5104 scope.go:117] "RemoveContainer" containerID="200dcc3840f55cbd23f7b942c041eac6fbc6806b7f71b56e4670f5259d00c70b" Jan 30 00:20:59 crc kubenswrapper[5104]: I0130 00:20:59.021399 5104 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"200dcc3840f55cbd23f7b942c041eac6fbc6806b7f71b56e4670f5259d00c70b"} err="failed to get container status \"200dcc3840f55cbd23f7b942c041eac6fbc6806b7f71b56e4670f5259d00c70b\": rpc error: code = NotFound desc = could not find container \"200dcc3840f55cbd23f7b942c041eac6fbc6806b7f71b56e4670f5259d00c70b\": container with ID starting with 200dcc3840f55cbd23f7b942c041eac6fbc6806b7f71b56e4670f5259d00c70b not found: ID does not exist" Jan 30 00:20:59 crc kubenswrapper[5104]: I0130 00:20:59.021494 5104 scope.go:117] "RemoveContainer" containerID="fcf28f348ed83bc59a65151c45ade27274d1c84f9d8088daf406794c698be505" Jan 30 00:20:59 crc kubenswrapper[5104]: I0130 00:20:59.021760 5104 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fcf28f348ed83bc59a65151c45ade27274d1c84f9d8088daf406794c698be505"} err="failed to get 
container status \"fcf28f348ed83bc59a65151c45ade27274d1c84f9d8088daf406794c698be505\": rpc error: code = NotFound desc = could not find container \"fcf28f348ed83bc59a65151c45ade27274d1c84f9d8088daf406794c698be505\": container with ID starting with fcf28f348ed83bc59a65151c45ade27274d1c84f9d8088daf406794c698be505 not found: ID does not exist" Jan 30 00:20:59 crc kubenswrapper[5104]: I0130 00:20:59.021781 5104 scope.go:117] "RemoveContainer" containerID="7f84773ad1561f2468a7b9e564afe4da5466479c27455daffc938f275cdd8759" Jan 30 00:20:59 crc kubenswrapper[5104]: I0130 00:20:59.022018 5104 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f84773ad1561f2468a7b9e564afe4da5466479c27455daffc938f275cdd8759"} err="failed to get container status \"7f84773ad1561f2468a7b9e564afe4da5466479c27455daffc938f275cdd8759\": rpc error: code = NotFound desc = could not find container \"7f84773ad1561f2468a7b9e564afe4da5466479c27455daffc938f275cdd8759\": container with ID starting with 7f84773ad1561f2468a7b9e564afe4da5466479c27455daffc938f275cdd8759 not found: ID does not exist" Jan 30 00:20:59 crc kubenswrapper[5104]: I0130 00:20:59.022066 5104 scope.go:117] "RemoveContainer" containerID="00bbd3826abc367fb2f43d9978de5b27a990cf044f166f1cfa8c32768845c5b9" Jan 30 00:20:59 crc kubenswrapper[5104]: I0130 00:20:59.022341 5104 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00bbd3826abc367fb2f43d9978de5b27a990cf044f166f1cfa8c32768845c5b9"} err="failed to get container status \"00bbd3826abc367fb2f43d9978de5b27a990cf044f166f1cfa8c32768845c5b9\": rpc error: code = NotFound desc = could not find container \"00bbd3826abc367fb2f43d9978de5b27a990cf044f166f1cfa8c32768845c5b9\": container with ID starting with 00bbd3826abc367fb2f43d9978de5b27a990cf044f166f1cfa8c32768845c5b9 not found: ID does not exist" Jan 30 00:20:59 crc kubenswrapper[5104]: I0130 00:20:59.022360 5104 scope.go:117] "RemoveContainer" 
containerID="d7f812057015c58485935e00c46076e7de69cc3dc4225f385b58a501c508194e" Jan 30 00:20:59 crc kubenswrapper[5104]: I0130 00:20:59.022566 5104 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7f812057015c58485935e00c46076e7de69cc3dc4225f385b58a501c508194e"} err="failed to get container status \"d7f812057015c58485935e00c46076e7de69cc3dc4225f385b58a501c508194e\": rpc error: code = NotFound desc = could not find container \"d7f812057015c58485935e00c46076e7de69cc3dc4225f385b58a501c508194e\": container with ID starting with d7f812057015c58485935e00c46076e7de69cc3dc4225f385b58a501c508194e not found: ID does not exist" Jan 30 00:20:59 crc kubenswrapper[5104]: I0130 00:20:59.022589 5104 scope.go:117] "RemoveContainer" containerID="bbff92e295524c14ff63f4ec0ed873547af60dd65283e33a4cadf5ad7004edef" Jan 30 00:20:59 crc kubenswrapper[5104]: I0130 00:20:59.022934 5104 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbff92e295524c14ff63f4ec0ed873547af60dd65283e33a4cadf5ad7004edef"} err="failed to get container status \"bbff92e295524c14ff63f4ec0ed873547af60dd65283e33a4cadf5ad7004edef\": rpc error: code = NotFound desc = could not find container \"bbff92e295524c14ff63f4ec0ed873547af60dd65283e33a4cadf5ad7004edef\": container with ID starting with bbff92e295524c14ff63f4ec0ed873547af60dd65283e33a4cadf5ad7004edef not found: ID does not exist" Jan 30 00:20:59 crc kubenswrapper[5104]: I0130 00:20:59.022966 5104 scope.go:117] "RemoveContainer" containerID="40d298f83de0c723a4a838843795e35036effc7bfbb77beb389a48bdc25f8524" Jan 30 00:20:59 crc kubenswrapper[5104]: I0130 00:20:59.023274 5104 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40d298f83de0c723a4a838843795e35036effc7bfbb77beb389a48bdc25f8524"} err="failed to get container status \"40d298f83de0c723a4a838843795e35036effc7bfbb77beb389a48bdc25f8524\": rpc error: code = NotFound desc = could 
not find container \"40d298f83de0c723a4a838843795e35036effc7bfbb77beb389a48bdc25f8524\": container with ID starting with 40d298f83de0c723a4a838843795e35036effc7bfbb77beb389a48bdc25f8524 not found: ID does not exist" Jan 30 00:20:59 crc kubenswrapper[5104]: I0130 00:20:59.023299 5104 scope.go:117] "RemoveContainer" containerID="5f894851ec70fd49996a95dc6ef315eed6b5ab4b80619be9c78b9bba34ea1f86" Jan 30 00:20:59 crc kubenswrapper[5104]: I0130 00:20:59.023542 5104 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f894851ec70fd49996a95dc6ef315eed6b5ab4b80619be9c78b9bba34ea1f86"} err="failed to get container status \"5f894851ec70fd49996a95dc6ef315eed6b5ab4b80619be9c78b9bba34ea1f86\": rpc error: code = NotFound desc = could not find container \"5f894851ec70fd49996a95dc6ef315eed6b5ab4b80619be9c78b9bba34ea1f86\": container with ID starting with 5f894851ec70fd49996a95dc6ef315eed6b5ab4b80619be9c78b9bba34ea1f86 not found: ID does not exist" Jan 30 00:20:59 crc kubenswrapper[5104]: I0130 00:20:59.023577 5104 scope.go:117] "RemoveContainer" containerID="a2094fbdb67d319df30745be54abcc7fd896a74987d488212e62dd2591fc4547" Jan 30 00:20:59 crc kubenswrapper[5104]: I0130 00:20:59.024055 5104 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2094fbdb67d319df30745be54abcc7fd896a74987d488212e62dd2591fc4547"} err="failed to get container status \"a2094fbdb67d319df30745be54abcc7fd896a74987d488212e62dd2591fc4547\": rpc error: code = NotFound desc = could not find container \"a2094fbdb67d319df30745be54abcc7fd896a74987d488212e62dd2591fc4547\": container with ID starting with a2094fbdb67d319df30745be54abcc7fd896a74987d488212e62dd2591fc4547 not found: ID does not exist" Jan 30 00:20:59 crc kubenswrapper[5104]: I0130 00:20:59.024084 5104 scope.go:117] "RemoveContainer" containerID="200dcc3840f55cbd23f7b942c041eac6fbc6806b7f71b56e4670f5259d00c70b" Jan 30 00:20:59 crc kubenswrapper[5104]: I0130 
00:20:59.024295 5104 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"200dcc3840f55cbd23f7b942c041eac6fbc6806b7f71b56e4670f5259d00c70b"} err="failed to get container status \"200dcc3840f55cbd23f7b942c041eac6fbc6806b7f71b56e4670f5259d00c70b\": rpc error: code = NotFound desc = could not find container \"200dcc3840f55cbd23f7b942c041eac6fbc6806b7f71b56e4670f5259d00c70b\": container with ID starting with 200dcc3840f55cbd23f7b942c041eac6fbc6806b7f71b56e4670f5259d00c70b not found: ID does not exist" Jan 30 00:20:59 crc kubenswrapper[5104]: I0130 00:20:59.024318 5104 scope.go:117] "RemoveContainer" containerID="fcf28f348ed83bc59a65151c45ade27274d1c84f9d8088daf406794c698be505" Jan 30 00:20:59 crc kubenswrapper[5104]: I0130 00:20:59.025095 5104 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fcf28f348ed83bc59a65151c45ade27274d1c84f9d8088daf406794c698be505"} err="failed to get container status \"fcf28f348ed83bc59a65151c45ade27274d1c84f9d8088daf406794c698be505\": rpc error: code = NotFound desc = could not find container \"fcf28f348ed83bc59a65151c45ade27274d1c84f9d8088daf406794c698be505\": container with ID starting with fcf28f348ed83bc59a65151c45ade27274d1c84f9d8088daf406794c698be505 not found: ID does not exist" Jan 30 00:20:59 crc kubenswrapper[5104]: I0130 00:20:59.025121 5104 scope.go:117] "RemoveContainer" containerID="7f84773ad1561f2468a7b9e564afe4da5466479c27455daffc938f275cdd8759" Jan 30 00:20:59 crc kubenswrapper[5104]: I0130 00:20:59.025816 5104 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f84773ad1561f2468a7b9e564afe4da5466479c27455daffc938f275cdd8759"} err="failed to get container status \"7f84773ad1561f2468a7b9e564afe4da5466479c27455daffc938f275cdd8759\": rpc error: code = NotFound desc = could not find container \"7f84773ad1561f2468a7b9e564afe4da5466479c27455daffc938f275cdd8759\": container with ID starting with 
7f84773ad1561f2468a7b9e564afe4da5466479c27455daffc938f275cdd8759 not found: ID does not exist" Jan 30 00:20:59 crc kubenswrapper[5104]: I0130 00:20:59.025838 5104 scope.go:117] "RemoveContainer" containerID="00bbd3826abc367fb2f43d9978de5b27a990cf044f166f1cfa8c32768845c5b9" Jan 30 00:20:59 crc kubenswrapper[5104]: I0130 00:20:59.027364 5104 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00bbd3826abc367fb2f43d9978de5b27a990cf044f166f1cfa8c32768845c5b9"} err="failed to get container status \"00bbd3826abc367fb2f43d9978de5b27a990cf044f166f1cfa8c32768845c5b9\": rpc error: code = NotFound desc = could not find container \"00bbd3826abc367fb2f43d9978de5b27a990cf044f166f1cfa8c32768845c5b9\": container with ID starting with 00bbd3826abc367fb2f43d9978de5b27a990cf044f166f1cfa8c32768845c5b9 not found: ID does not exist" Jan 30 00:20:59 crc kubenswrapper[5104]: I0130 00:20:59.027399 5104 scope.go:117] "RemoveContainer" containerID="d7f812057015c58485935e00c46076e7de69cc3dc4225f385b58a501c508194e" Jan 30 00:20:59 crc kubenswrapper[5104]: I0130 00:20:59.027692 5104 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7f812057015c58485935e00c46076e7de69cc3dc4225f385b58a501c508194e"} err="failed to get container status \"d7f812057015c58485935e00c46076e7de69cc3dc4225f385b58a501c508194e\": rpc error: code = NotFound desc = could not find container \"d7f812057015c58485935e00c46076e7de69cc3dc4225f385b58a501c508194e\": container with ID starting with d7f812057015c58485935e00c46076e7de69cc3dc4225f385b58a501c508194e not found: ID does not exist" Jan 30 00:20:59 crc kubenswrapper[5104]: I0130 00:20:59.027714 5104 scope.go:117] "RemoveContainer" containerID="bbff92e295524c14ff63f4ec0ed873547af60dd65283e33a4cadf5ad7004edef" Jan 30 00:20:59 crc kubenswrapper[5104]: I0130 00:20:59.027997 5104 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"bbff92e295524c14ff63f4ec0ed873547af60dd65283e33a4cadf5ad7004edef"} err="failed to get container status \"bbff92e295524c14ff63f4ec0ed873547af60dd65283e33a4cadf5ad7004edef\": rpc error: code = NotFound desc = could not find container \"bbff92e295524c14ff63f4ec0ed873547af60dd65283e33a4cadf5ad7004edef\": container with ID starting with bbff92e295524c14ff63f4ec0ed873547af60dd65283e33a4cadf5ad7004edef not found: ID does not exist" Jan 30 00:20:59 crc kubenswrapper[5104]: I0130 00:20:59.028023 5104 scope.go:117] "RemoveContainer" containerID="40d298f83de0c723a4a838843795e35036effc7bfbb77beb389a48bdc25f8524" Jan 30 00:20:59 crc kubenswrapper[5104]: I0130 00:20:59.028379 5104 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40d298f83de0c723a4a838843795e35036effc7bfbb77beb389a48bdc25f8524"} err="failed to get container status \"40d298f83de0c723a4a838843795e35036effc7bfbb77beb389a48bdc25f8524\": rpc error: code = NotFound desc = could not find container \"40d298f83de0c723a4a838843795e35036effc7bfbb77beb389a48bdc25f8524\": container with ID starting with 40d298f83de0c723a4a838843795e35036effc7bfbb77beb389a48bdc25f8524 not found: ID does not exist" Jan 30 00:20:59 crc kubenswrapper[5104]: I0130 00:20:59.028403 5104 scope.go:117] "RemoveContainer" containerID="5f894851ec70fd49996a95dc6ef315eed6b5ab4b80619be9c78b9bba34ea1f86" Jan 30 00:20:59 crc kubenswrapper[5104]: I0130 00:20:59.028610 5104 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f894851ec70fd49996a95dc6ef315eed6b5ab4b80619be9c78b9bba34ea1f86"} err="failed to get container status \"5f894851ec70fd49996a95dc6ef315eed6b5ab4b80619be9c78b9bba34ea1f86\": rpc error: code = NotFound desc = could not find container \"5f894851ec70fd49996a95dc6ef315eed6b5ab4b80619be9c78b9bba34ea1f86\": container with ID starting with 5f894851ec70fd49996a95dc6ef315eed6b5ab4b80619be9c78b9bba34ea1f86 not found: ID does not 
exist" Jan 30 00:20:59 crc kubenswrapper[5104]: I0130 00:20:59.028634 5104 scope.go:117] "RemoveContainer" containerID="a2094fbdb67d319df30745be54abcc7fd896a74987d488212e62dd2591fc4547" Jan 30 00:20:59 crc kubenswrapper[5104]: I0130 00:20:59.028810 5104 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2094fbdb67d319df30745be54abcc7fd896a74987d488212e62dd2591fc4547"} err="failed to get container status \"a2094fbdb67d319df30745be54abcc7fd896a74987d488212e62dd2591fc4547\": rpc error: code = NotFound desc = could not find container \"a2094fbdb67d319df30745be54abcc7fd896a74987d488212e62dd2591fc4547\": container with ID starting with a2094fbdb67d319df30745be54abcc7fd896a74987d488212e62dd2591fc4547 not found: ID does not exist" Jan 30 00:20:59 crc kubenswrapper[5104]: I0130 00:20:59.028830 5104 scope.go:117] "RemoveContainer" containerID="200dcc3840f55cbd23f7b942c041eac6fbc6806b7f71b56e4670f5259d00c70b" Jan 30 00:20:59 crc kubenswrapper[5104]: I0130 00:20:59.029077 5104 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"200dcc3840f55cbd23f7b942c041eac6fbc6806b7f71b56e4670f5259d00c70b"} err="failed to get container status \"200dcc3840f55cbd23f7b942c041eac6fbc6806b7f71b56e4670f5259d00c70b\": rpc error: code = NotFound desc = could not find container \"200dcc3840f55cbd23f7b942c041eac6fbc6806b7f71b56e4670f5259d00c70b\": container with ID starting with 200dcc3840f55cbd23f7b942c041eac6fbc6806b7f71b56e4670f5259d00c70b not found: ID does not exist" Jan 30 00:20:59 crc kubenswrapper[5104]: I0130 00:20:59.029103 5104 scope.go:117] "RemoveContainer" containerID="fcf28f348ed83bc59a65151c45ade27274d1c84f9d8088daf406794c698be505" Jan 30 00:20:59 crc kubenswrapper[5104]: I0130 00:20:59.029329 5104 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fcf28f348ed83bc59a65151c45ade27274d1c84f9d8088daf406794c698be505"} err="failed to get container status 
\"fcf28f348ed83bc59a65151c45ade27274d1c84f9d8088daf406794c698be505\": rpc error: code = NotFound desc = could not find container \"fcf28f348ed83bc59a65151c45ade27274d1c84f9d8088daf406794c698be505\": container with ID starting with fcf28f348ed83bc59a65151c45ade27274d1c84f9d8088daf406794c698be505 not found: ID does not exist" Jan 30 00:20:59 crc kubenswrapper[5104]: I0130 00:20:59.029350 5104 scope.go:117] "RemoveContainer" containerID="7f84773ad1561f2468a7b9e564afe4da5466479c27455daffc938f275cdd8759" Jan 30 00:20:59 crc kubenswrapper[5104]: I0130 00:20:59.029654 5104 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f84773ad1561f2468a7b9e564afe4da5466479c27455daffc938f275cdd8759"} err="failed to get container status \"7f84773ad1561f2468a7b9e564afe4da5466479c27455daffc938f275cdd8759\": rpc error: code = NotFound desc = could not find container \"7f84773ad1561f2468a7b9e564afe4da5466479c27455daffc938f275cdd8759\": container with ID starting with 7f84773ad1561f2468a7b9e564afe4da5466479c27455daffc938f275cdd8759 not found: ID does not exist" Jan 30 00:20:59 crc kubenswrapper[5104]: I0130 00:20:59.029674 5104 scope.go:117] "RemoveContainer" containerID="00bbd3826abc367fb2f43d9978de5b27a990cf044f166f1cfa8c32768845c5b9" Jan 30 00:20:59 crc kubenswrapper[5104]: I0130 00:20:59.030979 5104 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00bbd3826abc367fb2f43d9978de5b27a990cf044f166f1cfa8c32768845c5b9"} err="failed to get container status \"00bbd3826abc367fb2f43d9978de5b27a990cf044f166f1cfa8c32768845c5b9\": rpc error: code = NotFound desc = could not find container \"00bbd3826abc367fb2f43d9978de5b27a990cf044f166f1cfa8c32768845c5b9\": container with ID starting with 00bbd3826abc367fb2f43d9978de5b27a990cf044f166f1cfa8c32768845c5b9 not found: ID does not exist" Jan 30 00:20:59 crc kubenswrapper[5104]: I0130 00:20:59.030998 5104 scope.go:117] "RemoveContainer" 
containerID="d7f812057015c58485935e00c46076e7de69cc3dc4225f385b58a501c508194e" Jan 30 00:20:59 crc kubenswrapper[5104]: I0130 00:20:59.031240 5104 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7f812057015c58485935e00c46076e7de69cc3dc4225f385b58a501c508194e"} err="failed to get container status \"d7f812057015c58485935e00c46076e7de69cc3dc4225f385b58a501c508194e\": rpc error: code = NotFound desc = could not find container \"d7f812057015c58485935e00c46076e7de69cc3dc4225f385b58a501c508194e\": container with ID starting with d7f812057015c58485935e00c46076e7de69cc3dc4225f385b58a501c508194e not found: ID does not exist" Jan 30 00:20:59 crc kubenswrapper[5104]: I0130 00:20:59.031285 5104 scope.go:117] "RemoveContainer" containerID="bbff92e295524c14ff63f4ec0ed873547af60dd65283e33a4cadf5ad7004edef" Jan 30 00:20:59 crc kubenswrapper[5104]: I0130 00:20:59.031495 5104 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbff92e295524c14ff63f4ec0ed873547af60dd65283e33a4cadf5ad7004edef"} err="failed to get container status \"bbff92e295524c14ff63f4ec0ed873547af60dd65283e33a4cadf5ad7004edef\": rpc error: code = NotFound desc = could not find container \"bbff92e295524c14ff63f4ec0ed873547af60dd65283e33a4cadf5ad7004edef\": container with ID starting with bbff92e295524c14ff63f4ec0ed873547af60dd65283e33a4cadf5ad7004edef not found: ID does not exist" Jan 30 00:20:59 crc kubenswrapper[5104]: I0130 00:20:59.031512 5104 scope.go:117] "RemoveContainer" containerID="40d298f83de0c723a4a838843795e35036effc7bfbb77beb389a48bdc25f8524" Jan 30 00:20:59 crc kubenswrapper[5104]: I0130 00:20:59.031730 5104 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40d298f83de0c723a4a838843795e35036effc7bfbb77beb389a48bdc25f8524"} err="failed to get container status \"40d298f83de0c723a4a838843795e35036effc7bfbb77beb389a48bdc25f8524\": rpc error: code = NotFound desc = could 
not find container \"40d298f83de0c723a4a838843795e35036effc7bfbb77beb389a48bdc25f8524\": container with ID starting with 40d298f83de0c723a4a838843795e35036effc7bfbb77beb389a48bdc25f8524 not found: ID does not exist" Jan 30 00:20:59 crc kubenswrapper[5104]: I0130 00:20:59.031743 5104 scope.go:117] "RemoveContainer" containerID="5f894851ec70fd49996a95dc6ef315eed6b5ab4b80619be9c78b9bba34ea1f86" Jan 30 00:20:59 crc kubenswrapper[5104]: I0130 00:20:59.031907 5104 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f894851ec70fd49996a95dc6ef315eed6b5ab4b80619be9c78b9bba34ea1f86"} err="failed to get container status \"5f894851ec70fd49996a95dc6ef315eed6b5ab4b80619be9c78b9bba34ea1f86\": rpc error: code = NotFound desc = could not find container \"5f894851ec70fd49996a95dc6ef315eed6b5ab4b80619be9c78b9bba34ea1f86\": container with ID starting with 5f894851ec70fd49996a95dc6ef315eed6b5ab4b80619be9c78b9bba34ea1f86 not found: ID does not exist" Jan 30 00:20:59 crc kubenswrapper[5104]: I0130 00:20:59.031920 5104 scope.go:117] "RemoveContainer" containerID="a2094fbdb67d319df30745be54abcc7fd896a74987d488212e62dd2591fc4547" Jan 30 00:20:59 crc kubenswrapper[5104]: I0130 00:20:59.032116 5104 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2094fbdb67d319df30745be54abcc7fd896a74987d488212e62dd2591fc4547"} err="failed to get container status \"a2094fbdb67d319df30745be54abcc7fd896a74987d488212e62dd2591fc4547\": rpc error: code = NotFound desc = could not find container \"a2094fbdb67d319df30745be54abcc7fd896a74987d488212e62dd2591fc4547\": container with ID starting with a2094fbdb67d319df30745be54abcc7fd896a74987d488212e62dd2591fc4547 not found: ID does not exist" Jan 30 00:20:59 crc kubenswrapper[5104]: I0130 00:20:59.032128 5104 scope.go:117] "RemoveContainer" containerID="200dcc3840f55cbd23f7b942c041eac6fbc6806b7f71b56e4670f5259d00c70b" Jan 30 00:20:59 crc kubenswrapper[5104]: I0130 
00:20:59.032276 5104 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"200dcc3840f55cbd23f7b942c041eac6fbc6806b7f71b56e4670f5259d00c70b"} err="failed to get container status \"200dcc3840f55cbd23f7b942c041eac6fbc6806b7f71b56e4670f5259d00c70b\": rpc error: code = NotFound desc = could not find container \"200dcc3840f55cbd23f7b942c041eac6fbc6806b7f71b56e4670f5259d00c70b\": container with ID starting with 200dcc3840f55cbd23f7b942c041eac6fbc6806b7f71b56e4670f5259d00c70b not found: ID does not exist" Jan 30 00:20:59 crc kubenswrapper[5104]: I0130 00:20:59.032289 5104 scope.go:117] "RemoveContainer" containerID="fcf28f348ed83bc59a65151c45ade27274d1c84f9d8088daf406794c698be505" Jan 30 00:20:59 crc kubenswrapper[5104]: I0130 00:20:59.032443 5104 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fcf28f348ed83bc59a65151c45ade27274d1c84f9d8088daf406794c698be505"} err="failed to get container status \"fcf28f348ed83bc59a65151c45ade27274d1c84f9d8088daf406794c698be505\": rpc error: code = NotFound desc = could not find container \"fcf28f348ed83bc59a65151c45ade27274d1c84f9d8088daf406794c698be505\": container with ID starting with fcf28f348ed83bc59a65151c45ade27274d1c84f9d8088daf406794c698be505 not found: ID does not exist" Jan 30 00:20:59 crc kubenswrapper[5104]: I0130 00:20:59.841361 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dfzjm" event={"ID":"36849c5d-68a1-48dd-82b4-102cb89557e3","Type":"ContainerStarted","Data":"d937a37762ec142846d878dec6d8c908ef85fe54470d7fee37a4c93b65f5e64c"} Jan 30 00:20:59 crc kubenswrapper[5104]: I0130 00:20:59.841657 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dfzjm" event={"ID":"36849c5d-68a1-48dd-82b4-102cb89557e3","Type":"ContainerStarted","Data":"314c6f6f7d703621d4b64340f4dda9f2506838836cb0b942a580d55f043aec8f"} Jan 30 00:20:59 crc kubenswrapper[5104]: 
I0130 00:20:59.841670 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dfzjm" event={"ID":"36849c5d-68a1-48dd-82b4-102cb89557e3","Type":"ContainerStarted","Data":"602583003af2d0099b10b9be7d3d6fe07e98a1ddd2efa1178961bc7411b44986"} Jan 30 00:20:59 crc kubenswrapper[5104]: I0130 00:20:59.841678 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dfzjm" event={"ID":"36849c5d-68a1-48dd-82b4-102cb89557e3","Type":"ContainerStarted","Data":"f87418658250eb48526822d9b0ca1397b91dbb0db9a69a710234ca460dc37bcb"} Jan 30 00:20:59 crc kubenswrapper[5104]: I0130 00:20:59.841685 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dfzjm" event={"ID":"36849c5d-68a1-48dd-82b4-102cb89557e3","Type":"ContainerStarted","Data":"d41de59be3f9ce232e06f7b1b4418d6d2681160819af0079f6d10ead5ee34c4a"} Jan 30 00:20:59 crc kubenswrapper[5104]: I0130 00:20:59.841693 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dfzjm" event={"ID":"36849c5d-68a1-48dd-82b4-102cb89557e3","Type":"ContainerStarted","Data":"a15da959d3b6c95ad8757562e74094630241f47a510a00cad332cf1620985c9b"} Jan 30 00:20:59 crc kubenswrapper[5104]: I0130 00:20:59.844201 5104 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bk79c_3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f/kube-multus/0.log" Jan 30 00:20:59 crc kubenswrapper[5104]: I0130 00:20:59.844297 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bk79c" event={"ID":"3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f","Type":"ContainerStarted","Data":"7a332c5e669ed059d80e1194241f88f59741d521e225c45c3a1325a509135bfc"} Jan 30 00:21:00 crc kubenswrapper[5104]: I0130 00:21:00.555806 5104 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4dd9b451-9f5e-4822-b340-7557a89a3ce0" 
path="/var/lib/kubelet/pods/4dd9b451-9f5e-4822-b340-7557a89a3ce0/volumes" Jan 30 00:21:01 crc kubenswrapper[5104]: I0130 00:21:01.864986 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dfzjm" event={"ID":"36849c5d-68a1-48dd-82b4-102cb89557e3","Type":"ContainerStarted","Data":"a14d3277dde7fac8087af25aa47229da8eee67cd42349b4706f77c3d2256454a"} Jan 30 00:21:04 crc kubenswrapper[5104]: I0130 00:21:04.883803 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dfzjm" event={"ID":"36849c5d-68a1-48dd-82b4-102cb89557e3","Type":"ContainerStarted","Data":"33b0835148e1ec093b34933d3aa486f947c32ea325219f95b11468603ed8506a"} Jan 30 00:21:04 crc kubenswrapper[5104]: I0130 00:21:04.884186 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-dfzjm" Jan 30 00:21:04 crc kubenswrapper[5104]: I0130 00:21:04.884203 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-dfzjm" Jan 30 00:21:04 crc kubenswrapper[5104]: I0130 00:21:04.884215 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-dfzjm" Jan 30 00:21:04 crc kubenswrapper[5104]: I0130 00:21:04.919637 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-dfzjm" Jan 30 00:21:04 crc kubenswrapper[5104]: I0130 00:21:04.922143 5104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-dfzjm" podStartSLOduration=6.922127273 podStartE2EDuration="6.922127273s" podCreationTimestamp="2026-01-30 00:20:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:21:04.917343285 +0000 UTC m=+645.649682504" watchObservedRunningTime="2026-01-30 
00:21:04.922127273 +0000 UTC m=+645.654466502" Jan 30 00:21:04 crc kubenswrapper[5104]: I0130 00:21:04.926105 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-dfzjm" Jan 30 00:21:14 crc kubenswrapper[5104]: I0130 00:21:14.949977 5104 patch_prober.go:28] interesting pod/machine-config-daemon-jzfxc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 00:21:14 crc kubenswrapper[5104]: I0130 00:21:14.950593 5104 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jzfxc" podUID="2f49b5db-a679-4eef-9bf2-8d0275caac12" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 00:21:36 crc kubenswrapper[5104]: I0130 00:21:36.933616 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-dfzjm" Jan 30 00:21:44 crc kubenswrapper[5104]: I0130 00:21:44.950352 5104 patch_prober.go:28] interesting pod/machine-config-daemon-jzfxc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 00:21:44 crc kubenswrapper[5104]: I0130 00:21:44.951040 5104 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jzfxc" podUID="2f49b5db-a679-4eef-9bf2-8d0275caac12" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 00:22:00 crc kubenswrapper[5104]: I0130 00:22:00.138336 5104 kubelet.go:2537] 
"SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29495542-k7g2r"] Jan 30 00:22:00 crc kubenswrapper[5104]: I0130 00:22:00.149155 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29495542-k7g2r"] Jan 30 00:22:00 crc kubenswrapper[5104]: I0130 00:22:00.149316 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29495542-k7g2r" Jan 30 00:22:00 crc kubenswrapper[5104]: I0130 00:22:00.152114 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 30 00:22:00 crc kubenswrapper[5104]: I0130 00:22:00.152152 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-xh9r9\"" Jan 30 00:22:00 crc kubenswrapper[5104]: I0130 00:22:00.152363 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 30 00:22:00 crc kubenswrapper[5104]: I0130 00:22:00.242630 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8sxlf\" (UniqueName: \"kubernetes.io/projected/f3744f5e-251f-466b-8b04-bee4b3c6d743-kube-api-access-8sxlf\") pod \"auto-csr-approver-29495542-k7g2r\" (UID: \"f3744f5e-251f-466b-8b04-bee4b3c6d743\") " pod="openshift-infra/auto-csr-approver-29495542-k7g2r" Jan 30 00:22:00 crc kubenswrapper[5104]: I0130 00:22:00.343725 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8sxlf\" (UniqueName: \"kubernetes.io/projected/f3744f5e-251f-466b-8b04-bee4b3c6d743-kube-api-access-8sxlf\") pod \"auto-csr-approver-29495542-k7g2r\" (UID: \"f3744f5e-251f-466b-8b04-bee4b3c6d743\") " pod="openshift-infra/auto-csr-approver-29495542-k7g2r" Jan 30 00:22:00 crc kubenswrapper[5104]: I0130 00:22:00.368242 5104 operation_generator.go:615] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-8sxlf\" (UniqueName: \"kubernetes.io/projected/f3744f5e-251f-466b-8b04-bee4b3c6d743-kube-api-access-8sxlf\") pod \"auto-csr-approver-29495542-k7g2r\" (UID: \"f3744f5e-251f-466b-8b04-bee4b3c6d743\") " pod="openshift-infra/auto-csr-approver-29495542-k7g2r" Jan 30 00:22:00 crc kubenswrapper[5104]: I0130 00:22:00.472959 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29495542-k7g2r" Jan 30 00:22:00 crc kubenswrapper[5104]: I0130 00:22:00.688605 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29495542-k7g2r"] Jan 30 00:22:01 crc kubenswrapper[5104]: I0130 00:22:01.095096 5104 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gjkcz"] Jan 30 00:22:01 crc kubenswrapper[5104]: I0130 00:22:01.095515 5104 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-gjkcz" podUID="ae1cbfae-486a-406a-a607-6e85a313e208" containerName="registry-server" containerID="cri-o://b8a16fbcd16d55491fbd8f16b848b5584f51e8fc8a55969c374d5ab7cf7ad319" gracePeriod=30 Jan 30 00:22:01 crc kubenswrapper[5104]: I0130 00:22:01.250017 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495542-k7g2r" event={"ID":"f3744f5e-251f-466b-8b04-bee4b3c6d743","Type":"ContainerStarted","Data":"74ef1cadd2148f423682d3145499ad7027e9b4fe091fbcbf88f29f79c46b051c"} Jan 30 00:22:01 crc kubenswrapper[5104]: I0130 00:22:01.253121 5104 generic.go:358] "Generic (PLEG): container finished" podID="ae1cbfae-486a-406a-a607-6e85a313e208" containerID="b8a16fbcd16d55491fbd8f16b848b5584f51e8fc8a55969c374d5ab7cf7ad319" exitCode=0 Jan 30 00:22:01 crc kubenswrapper[5104]: I0130 00:22:01.253203 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gjkcz" 
event={"ID":"ae1cbfae-486a-406a-a607-6e85a313e208","Type":"ContainerDied","Data":"b8a16fbcd16d55491fbd8f16b848b5584f51e8fc8a55969c374d5ab7cf7ad319"} Jan 30 00:22:01 crc kubenswrapper[5104]: I0130 00:22:01.410427 5104 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gjkcz" Jan 30 00:22:01 crc kubenswrapper[5104]: I0130 00:22:01.462653 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pz4g7\" (UniqueName: \"kubernetes.io/projected/ae1cbfae-486a-406a-a607-6e85a313e208-kube-api-access-pz4g7\") pod \"ae1cbfae-486a-406a-a607-6e85a313e208\" (UID: \"ae1cbfae-486a-406a-a607-6e85a313e208\") " Jan 30 00:22:01 crc kubenswrapper[5104]: I0130 00:22:01.462776 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae1cbfae-486a-406a-a607-6e85a313e208-utilities\") pod \"ae1cbfae-486a-406a-a607-6e85a313e208\" (UID: \"ae1cbfae-486a-406a-a607-6e85a313e208\") " Jan 30 00:22:01 crc kubenswrapper[5104]: I0130 00:22:01.462823 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae1cbfae-486a-406a-a607-6e85a313e208-catalog-content\") pod \"ae1cbfae-486a-406a-a607-6e85a313e208\" (UID: \"ae1cbfae-486a-406a-a607-6e85a313e208\") " Jan 30 00:22:01 crc kubenswrapper[5104]: I0130 00:22:01.464729 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae1cbfae-486a-406a-a607-6e85a313e208-utilities" (OuterVolumeSpecName: "utilities") pod "ae1cbfae-486a-406a-a607-6e85a313e208" (UID: "ae1cbfae-486a-406a-a607-6e85a313e208"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:22:01 crc kubenswrapper[5104]: I0130 00:22:01.469022 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae1cbfae-486a-406a-a607-6e85a313e208-kube-api-access-pz4g7" (OuterVolumeSpecName: "kube-api-access-pz4g7") pod "ae1cbfae-486a-406a-a607-6e85a313e208" (UID: "ae1cbfae-486a-406a-a607-6e85a313e208"). InnerVolumeSpecName "kube-api-access-pz4g7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:22:01 crc kubenswrapper[5104]: I0130 00:22:01.476025 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae1cbfae-486a-406a-a607-6e85a313e208-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ae1cbfae-486a-406a-a607-6e85a313e208" (UID: "ae1cbfae-486a-406a-a607-6e85a313e208"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:22:01 crc kubenswrapper[5104]: I0130 00:22:01.564288 5104 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae1cbfae-486a-406a-a607-6e85a313e208-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 00:22:01 crc kubenswrapper[5104]: I0130 00:22:01.564667 5104 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae1cbfae-486a-406a-a607-6e85a313e208-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 00:22:01 crc kubenswrapper[5104]: I0130 00:22:01.564813 5104 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pz4g7\" (UniqueName: \"kubernetes.io/projected/ae1cbfae-486a-406a-a607-6e85a313e208-kube-api-access-pz4g7\") on node \"crc\" DevicePath \"\"" Jan 30 00:22:02 crc kubenswrapper[5104]: I0130 00:22:02.260046 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gjkcz" 
event={"ID":"ae1cbfae-486a-406a-a607-6e85a313e208","Type":"ContainerDied","Data":"503536269a595c3bbd6c8a11499d1c6324efd95f5638c84f340fcd363ff6a2cf"} Jan 30 00:22:02 crc kubenswrapper[5104]: I0130 00:22:02.260055 5104 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gjkcz" Jan 30 00:22:02 crc kubenswrapper[5104]: I0130 00:22:02.261445 5104 scope.go:117] "RemoveContainer" containerID="b8a16fbcd16d55491fbd8f16b848b5584f51e8fc8a55969c374d5ab7cf7ad319" Jan 30 00:22:02 crc kubenswrapper[5104]: I0130 00:22:02.261626 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495542-k7g2r" event={"ID":"f3744f5e-251f-466b-8b04-bee4b3c6d743","Type":"ContainerStarted","Data":"91ea0be2dc6b0ff1e2a1c098de61d64132d3a30b9213c786579f63b5c0e824ec"} Jan 30 00:22:02 crc kubenswrapper[5104]: I0130 00:22:02.283554 5104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29495542-k7g2r" podStartSLOduration=1.249715084 podStartE2EDuration="2.283537504s" podCreationTimestamp="2026-01-30 00:22:00 +0000 UTC" firstStartedPulling="2026-01-30 00:22:00.701087215 +0000 UTC m=+701.433426444" lastFinishedPulling="2026-01-30 00:22:01.734909605 +0000 UTC m=+702.467248864" observedRunningTime="2026-01-30 00:22:02.281554291 +0000 UTC m=+703.013893520" watchObservedRunningTime="2026-01-30 00:22:02.283537504 +0000 UTC m=+703.015876723" Jan 30 00:22:02 crc kubenswrapper[5104]: I0130 00:22:02.291015 5104 scope.go:117] "RemoveContainer" containerID="cf41a0224629c0f32db59dca4739cb0c3ec90664572ef640cacae2ee249a5e35" Jan 30 00:22:02 crc kubenswrapper[5104]: I0130 00:22:02.299457 5104 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gjkcz"] Jan 30 00:22:02 crc kubenswrapper[5104]: I0130 00:22:02.303923 5104 kubelet.go:2547] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-gjkcz"] Jan 30 00:22:02 crc kubenswrapper[5104]: I0130 00:22:02.309713 5104 scope.go:117] "RemoveContainer" containerID="93a0cf12b7a4597bd46ab35ae2950102a4d5b3554fd6785dd30b7f50e267a829" Jan 30 00:22:02 crc kubenswrapper[5104]: I0130 00:22:02.546078 5104 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae1cbfae-486a-406a-a607-6e85a313e208" path="/var/lib/kubelet/pods/ae1cbfae-486a-406a-a607-6e85a313e208/volumes" Jan 30 00:22:03 crc kubenswrapper[5104]: I0130 00:22:03.271821 5104 generic.go:358] "Generic (PLEG): container finished" podID="f3744f5e-251f-466b-8b04-bee4b3c6d743" containerID="91ea0be2dc6b0ff1e2a1c098de61d64132d3a30b9213c786579f63b5c0e824ec" exitCode=0 Jan 30 00:22:03 crc kubenswrapper[5104]: I0130 00:22:03.271940 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495542-k7g2r" event={"ID":"f3744f5e-251f-466b-8b04-bee4b3c6d743","Type":"ContainerDied","Data":"91ea0be2dc6b0ff1e2a1c098de61d64132d3a30b9213c786579f63b5c0e824ec"} Jan 30 00:22:04 crc kubenswrapper[5104]: I0130 00:22:04.482913 5104 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29495542-k7g2r" Jan 30 00:22:04 crc kubenswrapper[5104]: I0130 00:22:04.603822 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8sxlf\" (UniqueName: \"kubernetes.io/projected/f3744f5e-251f-466b-8b04-bee4b3c6d743-kube-api-access-8sxlf\") pod \"f3744f5e-251f-466b-8b04-bee4b3c6d743\" (UID: \"f3744f5e-251f-466b-8b04-bee4b3c6d743\") " Jan 30 00:22:04 crc kubenswrapper[5104]: I0130 00:22:04.609245 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3744f5e-251f-466b-8b04-bee4b3c6d743-kube-api-access-8sxlf" (OuterVolumeSpecName: "kube-api-access-8sxlf") pod "f3744f5e-251f-466b-8b04-bee4b3c6d743" (UID: "f3744f5e-251f-466b-8b04-bee4b3c6d743"). 
InnerVolumeSpecName "kube-api-access-8sxlf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:22:04 crc kubenswrapper[5104]: I0130 00:22:04.706112 5104 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8sxlf\" (UniqueName: \"kubernetes.io/projected/f3744f5e-251f-466b-8b04-bee4b3c6d743-kube-api-access-8sxlf\") on node \"crc\" DevicePath \"\"" Jan 30 00:22:04 crc kubenswrapper[5104]: I0130 00:22:04.924416 5104 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08njw49"] Jan 30 00:22:04 crc kubenswrapper[5104]: I0130 00:22:04.925108 5104 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ae1cbfae-486a-406a-a607-6e85a313e208" containerName="extract-utilities" Jan 30 00:22:04 crc kubenswrapper[5104]: I0130 00:22:04.925131 5104 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae1cbfae-486a-406a-a607-6e85a313e208" containerName="extract-utilities" Jan 30 00:22:04 crc kubenswrapper[5104]: I0130 00:22:04.925153 5104 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f3744f5e-251f-466b-8b04-bee4b3c6d743" containerName="oc" Jan 30 00:22:04 crc kubenswrapper[5104]: I0130 00:22:04.925160 5104 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3744f5e-251f-466b-8b04-bee4b3c6d743" containerName="oc" Jan 30 00:22:04 crc kubenswrapper[5104]: I0130 00:22:04.925181 5104 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ae1cbfae-486a-406a-a607-6e85a313e208" containerName="registry-server" Jan 30 00:22:04 crc kubenswrapper[5104]: I0130 00:22:04.925189 5104 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae1cbfae-486a-406a-a607-6e85a313e208" containerName="registry-server" Jan 30 00:22:04 crc kubenswrapper[5104]: I0130 00:22:04.925215 5104 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ae1cbfae-486a-406a-a607-6e85a313e208" 
containerName="extract-content" Jan 30 00:22:04 crc kubenswrapper[5104]: I0130 00:22:04.925223 5104 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae1cbfae-486a-406a-a607-6e85a313e208" containerName="extract-content" Jan 30 00:22:04 crc kubenswrapper[5104]: I0130 00:22:04.925333 5104 memory_manager.go:356] "RemoveStaleState removing state" podUID="f3744f5e-251f-466b-8b04-bee4b3c6d743" containerName="oc" Jan 30 00:22:04 crc kubenswrapper[5104]: I0130 00:22:04.925355 5104 memory_manager.go:356] "RemoveStaleState removing state" podUID="ae1cbfae-486a-406a-a607-6e85a313e208" containerName="registry-server" Jan 30 00:22:04 crc kubenswrapper[5104]: I0130 00:22:04.937059 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08njw49"] Jan 30 00:22:04 crc kubenswrapper[5104]: I0130 00:22:04.937230 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08njw49" Jan 30 00:22:04 crc kubenswrapper[5104]: I0130 00:22:04.939951 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\"" Jan 30 00:22:05 crc kubenswrapper[5104]: I0130 00:22:05.010010 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8ae706ea-d078-41e6-86b2-7dc023d77808-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08njw49\" (UID: \"8ae706ea-d078-41e6-86b2-7dc023d77808\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08njw49" Jan 30 00:22:05 crc kubenswrapper[5104]: I0130 00:22:05.010223 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8ae706ea-d078-41e6-86b2-7dc023d77808-util\") pod 
\"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08njw49\" (UID: \"8ae706ea-d078-41e6-86b2-7dc023d77808\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08njw49" Jan 30 00:22:05 crc kubenswrapper[5104]: I0130 00:22:05.010299 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wml8q\" (UniqueName: \"kubernetes.io/projected/8ae706ea-d078-41e6-86b2-7dc023d77808-kube-api-access-wml8q\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08njw49\" (UID: \"8ae706ea-d078-41e6-86b2-7dc023d77808\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08njw49" Jan 30 00:22:05 crc kubenswrapper[5104]: I0130 00:22:05.112052 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8ae706ea-d078-41e6-86b2-7dc023d77808-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08njw49\" (UID: \"8ae706ea-d078-41e6-86b2-7dc023d77808\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08njw49" Jan 30 00:22:05 crc kubenswrapper[5104]: I0130 00:22:05.112322 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8ae706ea-d078-41e6-86b2-7dc023d77808-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08njw49\" (UID: \"8ae706ea-d078-41e6-86b2-7dc023d77808\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08njw49" Jan 30 00:22:05 crc kubenswrapper[5104]: I0130 00:22:05.112390 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wml8q\" (UniqueName: \"kubernetes.io/projected/8ae706ea-d078-41e6-86b2-7dc023d77808-kube-api-access-wml8q\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08njw49\" (UID: 
\"8ae706ea-d078-41e6-86b2-7dc023d77808\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08njw49" Jan 30 00:22:05 crc kubenswrapper[5104]: I0130 00:22:05.113826 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8ae706ea-d078-41e6-86b2-7dc023d77808-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08njw49\" (UID: \"8ae706ea-d078-41e6-86b2-7dc023d77808\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08njw49" Jan 30 00:22:05 crc kubenswrapper[5104]: I0130 00:22:05.114085 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8ae706ea-d078-41e6-86b2-7dc023d77808-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08njw49\" (UID: \"8ae706ea-d078-41e6-86b2-7dc023d77808\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08njw49" Jan 30 00:22:05 crc kubenswrapper[5104]: I0130 00:22:05.141591 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wml8q\" (UniqueName: \"kubernetes.io/projected/8ae706ea-d078-41e6-86b2-7dc023d77808-kube-api-access-wml8q\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08njw49\" (UID: \"8ae706ea-d078-41e6-86b2-7dc023d77808\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08njw49" Jan 30 00:22:05 crc kubenswrapper[5104]: I0130 00:22:05.262296 5104 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08njw49" Jan 30 00:22:05 crc kubenswrapper[5104]: I0130 00:22:05.291318 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495542-k7g2r" event={"ID":"f3744f5e-251f-466b-8b04-bee4b3c6d743","Type":"ContainerDied","Data":"74ef1cadd2148f423682d3145499ad7027e9b4fe091fbcbf88f29f79c46b051c"} Jan 30 00:22:05 crc kubenswrapper[5104]: I0130 00:22:05.291370 5104 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74ef1cadd2148f423682d3145499ad7027e9b4fe091fbcbf88f29f79c46b051c" Jan 30 00:22:05 crc kubenswrapper[5104]: I0130 00:22:05.291397 5104 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29495542-k7g2r" Jan 30 00:22:05 crc kubenswrapper[5104]: I0130 00:22:05.675482 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08njw49"] Jan 30 00:22:05 crc kubenswrapper[5104]: W0130 00:22:05.680396 5104 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8ae706ea_d078_41e6_86b2_7dc023d77808.slice/crio-2ed0a2409fcede58108a46cd7c87b94012bbbb4a480f61c744cc8dd328ec235e WatchSource:0}: Error finding container 2ed0a2409fcede58108a46cd7c87b94012bbbb4a480f61c744cc8dd328ec235e: Status 404 returned error can't find the container with id 2ed0a2409fcede58108a46cd7c87b94012bbbb4a480f61c744cc8dd328ec235e Jan 30 00:22:06 crc kubenswrapper[5104]: I0130 00:22:06.302424 5104 generic.go:358] "Generic (PLEG): container finished" podID="8ae706ea-d078-41e6-86b2-7dc023d77808" containerID="b39f5f49b8cd1993cf234c35dee905c0d341372ac9e01d050a9fa034010a6d1e" exitCode=0 Jan 30 00:22:06 crc kubenswrapper[5104]: I0130 00:22:06.302481 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08njw49" event={"ID":"8ae706ea-d078-41e6-86b2-7dc023d77808","Type":"ContainerDied","Data":"b39f5f49b8cd1993cf234c35dee905c0d341372ac9e01d050a9fa034010a6d1e"} Jan 30 00:22:06 crc kubenswrapper[5104]: I0130 00:22:06.302543 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08njw49" event={"ID":"8ae706ea-d078-41e6-86b2-7dc023d77808","Type":"ContainerStarted","Data":"2ed0a2409fcede58108a46cd7c87b94012bbbb4a480f61c744cc8dd328ec235e"} Jan 30 00:22:08 crc kubenswrapper[5104]: I0130 00:22:08.318362 5104 generic.go:358] "Generic (PLEG): container finished" podID="8ae706ea-d078-41e6-86b2-7dc023d77808" containerID="ded82d99b92fb902cbe68c5da01bcc4e96da86a83d5ee35bceaef26f64a9cbcd" exitCode=0 Jan 30 00:22:08 crc kubenswrapper[5104]: I0130 00:22:08.318452 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08njw49" event={"ID":"8ae706ea-d078-41e6-86b2-7dc023d77808","Type":"ContainerDied","Data":"ded82d99b92fb902cbe68c5da01bcc4e96da86a83d5ee35bceaef26f64a9cbcd"} Jan 30 00:22:09 crc kubenswrapper[5104]: I0130 00:22:09.332037 5104 generic.go:358] "Generic (PLEG): container finished" podID="8ae706ea-d078-41e6-86b2-7dc023d77808" containerID="810babc6ab16583db6ecb9e15ec10b688056862b4fda04272e0c9f64d3772a72" exitCode=0 Jan 30 00:22:09 crc kubenswrapper[5104]: I0130 00:22:09.332164 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08njw49" event={"ID":"8ae706ea-d078-41e6-86b2-7dc023d77808","Type":"ContainerDied","Data":"810babc6ab16583db6ecb9e15ec10b688056862b4fda04272e0c9f64d3772a72"} Jan 30 00:22:10 crc kubenswrapper[5104]: I0130 00:22:10.627963 5104 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08njw49" Jan 30 00:22:10 crc kubenswrapper[5104]: I0130 00:22:10.786189 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wml8q\" (UniqueName: \"kubernetes.io/projected/8ae706ea-d078-41e6-86b2-7dc023d77808-kube-api-access-wml8q\") pod \"8ae706ea-d078-41e6-86b2-7dc023d77808\" (UID: \"8ae706ea-d078-41e6-86b2-7dc023d77808\") " Jan 30 00:22:10 crc kubenswrapper[5104]: I0130 00:22:10.786294 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8ae706ea-d078-41e6-86b2-7dc023d77808-bundle\") pod \"8ae706ea-d078-41e6-86b2-7dc023d77808\" (UID: \"8ae706ea-d078-41e6-86b2-7dc023d77808\") " Jan 30 00:22:10 crc kubenswrapper[5104]: I0130 00:22:10.786415 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8ae706ea-d078-41e6-86b2-7dc023d77808-util\") pod \"8ae706ea-d078-41e6-86b2-7dc023d77808\" (UID: \"8ae706ea-d078-41e6-86b2-7dc023d77808\") " Jan 30 00:22:10 crc kubenswrapper[5104]: I0130 00:22:10.791720 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ae706ea-d078-41e6-86b2-7dc023d77808-bundle" (OuterVolumeSpecName: "bundle") pod "8ae706ea-d078-41e6-86b2-7dc023d77808" (UID: "8ae706ea-d078-41e6-86b2-7dc023d77808"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:22:10 crc kubenswrapper[5104]: I0130 00:22:10.794294 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ae706ea-d078-41e6-86b2-7dc023d77808-kube-api-access-wml8q" (OuterVolumeSpecName: "kube-api-access-wml8q") pod "8ae706ea-d078-41e6-86b2-7dc023d77808" (UID: "8ae706ea-d078-41e6-86b2-7dc023d77808"). InnerVolumeSpecName "kube-api-access-wml8q". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:22:10 crc kubenswrapper[5104]: I0130 00:22:10.799903 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ae706ea-d078-41e6-86b2-7dc023d77808-util" (OuterVolumeSpecName: "util") pod "8ae706ea-d078-41e6-86b2-7dc023d77808" (UID: "8ae706ea-d078-41e6-86b2-7dc023d77808"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:22:10 crc kubenswrapper[5104]: I0130 00:22:10.888229 5104 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wml8q\" (UniqueName: \"kubernetes.io/projected/8ae706ea-d078-41e6-86b2-7dc023d77808-kube-api-access-wml8q\") on node \"crc\" DevicePath \"\"" Jan 30 00:22:10 crc kubenswrapper[5104]: I0130 00:22:10.888267 5104 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8ae706ea-d078-41e6-86b2-7dc023d77808-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 00:22:10 crc kubenswrapper[5104]: I0130 00:22:10.888279 5104 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8ae706ea-d078-41e6-86b2-7dc023d77808-util\") on node \"crc\" DevicePath \"\"" Jan 30 00:22:11 crc kubenswrapper[5104]: I0130 00:22:11.350653 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08njw49" event={"ID":"8ae706ea-d078-41e6-86b2-7dc023d77808","Type":"ContainerDied","Data":"2ed0a2409fcede58108a46cd7c87b94012bbbb4a480f61c744cc8dd328ec235e"} Jan 30 00:22:11 crc kubenswrapper[5104]: I0130 00:22:11.350711 5104 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ed0a2409fcede58108a46cd7c87b94012bbbb4a480f61c744cc8dd328ec235e" Jan 30 00:22:11 crc kubenswrapper[5104]: I0130 00:22:11.350743 5104 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08njw49" Jan 30 00:22:12 crc kubenswrapper[5104]: I0130 00:22:12.933660 5104 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ektxjv"] Jan 30 00:22:12 crc kubenswrapper[5104]: I0130 00:22:12.934155 5104 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8ae706ea-d078-41e6-86b2-7dc023d77808" containerName="extract" Jan 30 00:22:12 crc kubenswrapper[5104]: I0130 00:22:12.934167 5104 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ae706ea-d078-41e6-86b2-7dc023d77808" containerName="extract" Jan 30 00:22:12 crc kubenswrapper[5104]: I0130 00:22:12.934176 5104 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8ae706ea-d078-41e6-86b2-7dc023d77808" containerName="pull" Jan 30 00:22:12 crc kubenswrapper[5104]: I0130 00:22:12.934182 5104 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ae706ea-d078-41e6-86b2-7dc023d77808" containerName="pull" Jan 30 00:22:12 crc kubenswrapper[5104]: I0130 00:22:12.934196 5104 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8ae706ea-d078-41e6-86b2-7dc023d77808" containerName="util" Jan 30 00:22:12 crc kubenswrapper[5104]: I0130 00:22:12.934203 5104 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ae706ea-d078-41e6-86b2-7dc023d77808" containerName="util" Jan 30 00:22:12 crc kubenswrapper[5104]: I0130 00:22:12.934290 5104 memory_manager.go:356] "RemoveStaleState removing state" podUID="8ae706ea-d078-41e6-86b2-7dc023d77808" containerName="extract" Jan 30 00:22:12 crc kubenswrapper[5104]: I0130 00:22:12.944030 5104 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ektxjv" Jan 30 00:22:12 crc kubenswrapper[5104]: I0130 00:22:12.946270 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\"" Jan 30 00:22:12 crc kubenswrapper[5104]: I0130 00:22:12.949579 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ektxjv"] Jan 30 00:22:13 crc kubenswrapper[5104]: I0130 00:22:13.020675 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bfae5940-0f71-4c0a-92bc-3296f59b008c-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ektxjv\" (UID: \"bfae5940-0f71-4c0a-92bc-3296f59b008c\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ektxjv" Jan 30 00:22:13 crc kubenswrapper[5104]: I0130 00:22:13.020734 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bfae5940-0f71-4c0a-92bc-3296f59b008c-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ektxjv\" (UID: \"bfae5940-0f71-4c0a-92bc-3296f59b008c\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ektxjv" Jan 30 00:22:13 crc kubenswrapper[5104]: I0130 00:22:13.020773 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfpw5\" (UniqueName: \"kubernetes.io/projected/bfae5940-0f71-4c0a-92bc-3296f59b008c-kube-api-access-kfpw5\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ektxjv\" (UID: \"bfae5940-0f71-4c0a-92bc-3296f59b008c\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ektxjv" Jan 30 00:22:13 crc 
kubenswrapper[5104]: I0130 00:22:13.121560 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bfae5940-0f71-4c0a-92bc-3296f59b008c-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ektxjv\" (UID: \"bfae5940-0f71-4c0a-92bc-3296f59b008c\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ektxjv" Jan 30 00:22:13 crc kubenswrapper[5104]: I0130 00:22:13.121617 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bfae5940-0f71-4c0a-92bc-3296f59b008c-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ektxjv\" (UID: \"bfae5940-0f71-4c0a-92bc-3296f59b008c\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ektxjv" Jan 30 00:22:13 crc kubenswrapper[5104]: I0130 00:22:13.121665 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kfpw5\" (UniqueName: \"kubernetes.io/projected/bfae5940-0f71-4c0a-92bc-3296f59b008c-kube-api-access-kfpw5\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ektxjv\" (UID: \"bfae5940-0f71-4c0a-92bc-3296f59b008c\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ektxjv" Jan 30 00:22:13 crc kubenswrapper[5104]: I0130 00:22:13.122373 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bfae5940-0f71-4c0a-92bc-3296f59b008c-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ektxjv\" (UID: \"bfae5940-0f71-4c0a-92bc-3296f59b008c\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ektxjv" Jan 30 00:22:13 crc kubenswrapper[5104]: I0130 00:22:13.122577 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/bfae5940-0f71-4c0a-92bc-3296f59b008c-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ektxjv\" (UID: \"bfae5940-0f71-4c0a-92bc-3296f59b008c\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ektxjv" Jan 30 00:22:13 crc kubenswrapper[5104]: I0130 00:22:13.139046 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfpw5\" (UniqueName: \"kubernetes.io/projected/bfae5940-0f71-4c0a-92bc-3296f59b008c-kube-api-access-kfpw5\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ektxjv\" (UID: \"bfae5940-0f71-4c0a-92bc-3296f59b008c\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ektxjv" Jan 30 00:22:13 crc kubenswrapper[5104]: I0130 00:22:13.314647 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ektxjv" Jan 30 00:22:13 crc kubenswrapper[5104]: I0130 00:22:13.597243 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ektxjv"] Jan 30 00:22:13 crc kubenswrapper[5104]: I0130 00:22:13.933298 5104 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fmktgx"] Jan 30 00:22:13 crc kubenswrapper[5104]: I0130 00:22:13.942509 5104 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fmktgx" Jan 30 00:22:13 crc kubenswrapper[5104]: I0130 00:22:13.944059 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fmktgx"] Jan 30 00:22:14 crc kubenswrapper[5104]: I0130 00:22:14.032319 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3025cc01-0b4c-401d-bdec-5fe14e497982-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fmktgx\" (UID: \"3025cc01-0b4c-401d-bdec-5fe14e497982\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fmktgx" Jan 30 00:22:14 crc kubenswrapper[5104]: I0130 00:22:14.032441 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3025cc01-0b4c-401d-bdec-5fe14e497982-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fmktgx\" (UID: \"3025cc01-0b4c-401d-bdec-5fe14e497982\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fmktgx" Jan 30 00:22:14 crc kubenswrapper[5104]: I0130 00:22:14.032510 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvt25\" (UniqueName: \"kubernetes.io/projected/3025cc01-0b4c-401d-bdec-5fe14e497982-kube-api-access-qvt25\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fmktgx\" (UID: \"3025cc01-0b4c-401d-bdec-5fe14e497982\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fmktgx" Jan 30 00:22:14 crc kubenswrapper[5104]: I0130 00:22:14.134667 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/3025cc01-0b4c-401d-bdec-5fe14e497982-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fmktgx\" (UID: \"3025cc01-0b4c-401d-bdec-5fe14e497982\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fmktgx" Jan 30 00:22:14 crc kubenswrapper[5104]: I0130 00:22:14.134833 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3025cc01-0b4c-401d-bdec-5fe14e497982-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fmktgx\" (UID: \"3025cc01-0b4c-401d-bdec-5fe14e497982\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fmktgx" Jan 30 00:22:14 crc kubenswrapper[5104]: I0130 00:22:14.134975 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qvt25\" (UniqueName: \"kubernetes.io/projected/3025cc01-0b4c-401d-bdec-5fe14e497982-kube-api-access-qvt25\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fmktgx\" (UID: \"3025cc01-0b4c-401d-bdec-5fe14e497982\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fmktgx" Jan 30 00:22:14 crc kubenswrapper[5104]: I0130 00:22:14.135283 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3025cc01-0b4c-401d-bdec-5fe14e497982-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fmktgx\" (UID: \"3025cc01-0b4c-401d-bdec-5fe14e497982\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fmktgx" Jan 30 00:22:14 crc kubenswrapper[5104]: I0130 00:22:14.135554 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3025cc01-0b4c-401d-bdec-5fe14e497982-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fmktgx\" (UID: 
\"3025cc01-0b4c-401d-bdec-5fe14e497982\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fmktgx" Jan 30 00:22:14 crc kubenswrapper[5104]: I0130 00:22:14.171134 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvt25\" (UniqueName: \"kubernetes.io/projected/3025cc01-0b4c-401d-bdec-5fe14e497982-kube-api-access-qvt25\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fmktgx\" (UID: \"3025cc01-0b4c-401d-bdec-5fe14e497982\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fmktgx" Jan 30 00:22:14 crc kubenswrapper[5104]: I0130 00:22:14.272630 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fmktgx" Jan 30 00:22:14 crc kubenswrapper[5104]: I0130 00:22:14.370963 5104 generic.go:358] "Generic (PLEG): container finished" podID="bfae5940-0f71-4c0a-92bc-3296f59b008c" containerID="ba8073b22ee475b4eaf22797a22ffb4e582bdf8ae696e7fc66913f966eb870f2" exitCode=0 Jan 30 00:22:14 crc kubenswrapper[5104]: I0130 00:22:14.371072 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ektxjv" event={"ID":"bfae5940-0f71-4c0a-92bc-3296f59b008c","Type":"ContainerDied","Data":"ba8073b22ee475b4eaf22797a22ffb4e582bdf8ae696e7fc66913f966eb870f2"} Jan 30 00:22:14 crc kubenswrapper[5104]: I0130 00:22:14.371101 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ektxjv" event={"ID":"bfae5940-0f71-4c0a-92bc-3296f59b008c","Type":"ContainerStarted","Data":"d49897647c5cc55c8d158cde2865fa28e2bba69d6fa332de94ace7714d6527ee"} Jan 30 00:22:14 crc kubenswrapper[5104]: I0130 00:22:14.539962 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fmktgx"] Jan 30 00:22:14 crc kubenswrapper[5104]: W0130 00:22:14.544553 5104 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3025cc01_0b4c_401d_bdec_5fe14e497982.slice/crio-8f60aaca08bd733bb86d1fe7319d09888d342d466b30b23d5d2dd1cbb327764b WatchSource:0}: Error finding container 8f60aaca08bd733bb86d1fe7319d09888d342d466b30b23d5d2dd1cbb327764b: Status 404 returned error can't find the container with id 8f60aaca08bd733bb86d1fe7319d09888d342d466b30b23d5d2dd1cbb327764b Jan 30 00:22:14 crc kubenswrapper[5104]: E0130 00:22:14.608607 5104 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving" image="registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb" Jan 30 00:22:14 crc kubenswrapper[5104]: E0130 00:22:14.608776 5104 kuberuntime_manager.go:1358] "Unhandled Error" err="init container &Container{Name:pull,Image:registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb,Command:[/util/cpb /bundle],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m 
DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:bundle,ReadOnly:false,MountPath:/bundle,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:util,ReadOnly:false,MountPath:/util,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kfpw5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000240000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ektxjv_openshift-marketplace(bfae5940-0f71-4c0a-92bc-3296f59b008c): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving" logger="UnhandledError" Jan 30 
00:22:14 crc kubenswrapper[5104]: E0130 00:22:14.609921 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ektxjv" podUID="bfae5940-0f71-4c0a-92bc-3296f59b008c" Jan 30 00:22:14 crc kubenswrapper[5104]: I0130 00:22:14.950400 5104 patch_prober.go:28] interesting pod/machine-config-daemon-jzfxc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 00:22:14 crc kubenswrapper[5104]: I0130 00:22:14.950463 5104 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jzfxc" podUID="2f49b5db-a679-4eef-9bf2-8d0275caac12" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 00:22:14 crc kubenswrapper[5104]: I0130 00:22:14.950509 5104 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jzfxc" Jan 30 00:22:14 crc kubenswrapper[5104]: I0130 00:22:14.951104 5104 
kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d754d2bbf2cca802aaf2079a592a35c77544128b415319cab69816ec60b29ff6"} pod="openshift-machine-config-operator/machine-config-daemon-jzfxc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 00:22:14 crc kubenswrapper[5104]: I0130 00:22:14.951156 5104 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jzfxc" podUID="2f49b5db-a679-4eef-9bf2-8d0275caac12" containerName="machine-config-daemon" containerID="cri-o://d754d2bbf2cca802aaf2079a592a35c77544128b415319cab69816ec60b29ff6" gracePeriod=600 Jan 30 00:22:15 crc kubenswrapper[5104]: I0130 00:22:15.378784 5104 generic.go:358] "Generic (PLEG): container finished" podID="3025cc01-0b4c-401d-bdec-5fe14e497982" containerID="0a7d360fa9e92c7c31121e622f5eaf34ccac6bb944cc54f3cc1c7467e0393271" exitCode=0 Jan 30 00:22:15 crc kubenswrapper[5104]: I0130 00:22:15.378905 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fmktgx" event={"ID":"3025cc01-0b4c-401d-bdec-5fe14e497982","Type":"ContainerDied","Data":"0a7d360fa9e92c7c31121e622f5eaf34ccac6bb944cc54f3cc1c7467e0393271"} Jan 30 00:22:15 crc kubenswrapper[5104]: I0130 00:22:15.379220 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fmktgx" event={"ID":"3025cc01-0b4c-401d-bdec-5fe14e497982","Type":"ContainerStarted","Data":"8f60aaca08bd733bb86d1fe7319d09888d342d466b30b23d5d2dd1cbb327764b"} Jan 30 00:22:15 crc kubenswrapper[5104]: I0130 00:22:15.387889 5104 generic.go:358] "Generic (PLEG): container finished" podID="2f49b5db-a679-4eef-9bf2-8d0275caac12" containerID="d754d2bbf2cca802aaf2079a592a35c77544128b415319cab69816ec60b29ff6" exitCode=0 Jan 30 
00:22:15 crc kubenswrapper[5104]: I0130 00:22:15.387971 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzfxc" event={"ID":"2f49b5db-a679-4eef-9bf2-8d0275caac12","Type":"ContainerDied","Data":"d754d2bbf2cca802aaf2079a592a35c77544128b415319cab69816ec60b29ff6"} Jan 30 00:22:15 crc kubenswrapper[5104]: I0130 00:22:15.388026 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzfxc" event={"ID":"2f49b5db-a679-4eef-9bf2-8d0275caac12","Type":"ContainerStarted","Data":"c126fc7c5d040b04802a3f6d1d50a32c0a699bdd4fab7d404eb1bbdcb4462998"} Jan 30 00:22:15 crc kubenswrapper[5104]: I0130 00:22:15.388045 5104 scope.go:117] "RemoveContainer" containerID="592be4ef21e7b38e7e47f25a331744fdeaee7be766fc0073ca4589c272651c5a" Jan 30 00:22:15 crc kubenswrapper[5104]: E0130 00:22:15.390508 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ektxjv" podUID="bfae5940-0f71-4c0a-92bc-3296f59b008c" Jan 30 00:22:18 crc 
kubenswrapper[5104]: I0130 00:22:18.420288 5104 generic.go:358] "Generic (PLEG): container finished" podID="3025cc01-0b4c-401d-bdec-5fe14e497982" containerID="36ed46db4e97b1dbc62aa967e7910abe784282ddaf02e1d6c940c7bd34d10b55" exitCode=0 Jan 30 00:22:18 crc kubenswrapper[5104]: I0130 00:22:18.420369 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fmktgx" event={"ID":"3025cc01-0b4c-401d-bdec-5fe14e497982","Type":"ContainerDied","Data":"36ed46db4e97b1dbc62aa967e7910abe784282ddaf02e1d6c940c7bd34d10b55"} Jan 30 00:22:19 crc kubenswrapper[5104]: E0130 00:22:19.204902 5104 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3025cc01_0b4c_401d_bdec_5fe14e497982.slice/crio-conmon-9e08d623f2b1270f7fb18b81c9b5dc4034942aa67809cbc3413d1173da504978.scope\": RecentStats: unable to find data in memory cache]" Jan 30 00:22:19 crc kubenswrapper[5104]: I0130 00:22:19.427374 5104 generic.go:358] "Generic (PLEG): container finished" podID="3025cc01-0b4c-401d-bdec-5fe14e497982" containerID="9e08d623f2b1270f7fb18b81c9b5dc4034942aa67809cbc3413d1173da504978" exitCode=0 Jan 30 00:22:19 crc kubenswrapper[5104]: I0130 00:22:19.427771 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fmktgx" event={"ID":"3025cc01-0b4c-401d-bdec-5fe14e497982","Type":"ContainerDied","Data":"9e08d623f2b1270f7fb18b81c9b5dc4034942aa67809cbc3413d1173da504978"} Jan 30 00:22:20 crc kubenswrapper[5104]: I0130 00:22:20.619092 5104 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-9bc85b4bf-lbsz6"] Jan 30 00:22:20 crc kubenswrapper[5104]: I0130 00:22:20.651494 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-9bc85b4bf-lbsz6"] 
Jan 30 00:22:20 crc kubenswrapper[5104]: I0130 00:22:20.651625 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-lbsz6" Jan 30 00:22:20 crc kubenswrapper[5104]: I0130 00:22:20.655266 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-dockercfg-npxmb\"" Jan 30 00:22:20 crc kubenswrapper[5104]: I0130 00:22:20.655712 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operators\"/\"kube-root-ca.crt\"" Jan 30 00:22:20 crc kubenswrapper[5104]: I0130 00:22:20.656440 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operators\"/\"openshift-service-ca.crt\"" Jan 30 00:22:20 crc kubenswrapper[5104]: I0130 00:22:20.749925 5104 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fmktgx" Jan 30 00:22:20 crc kubenswrapper[5104]: I0130 00:22:20.774810 5104 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-56fbb5df75-dsksm"] Jan 30 00:22:20 crc kubenswrapper[5104]: I0130 00:22:20.775338 5104 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3025cc01-0b4c-401d-bdec-5fe14e497982" containerName="util" Jan 30 00:22:20 crc kubenswrapper[5104]: I0130 00:22:20.775356 5104 state_mem.go:107] "Deleted CPUSet assignment" podUID="3025cc01-0b4c-401d-bdec-5fe14e497982" containerName="util" Jan 30 00:22:20 crc kubenswrapper[5104]: I0130 00:22:20.775372 5104 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3025cc01-0b4c-401d-bdec-5fe14e497982" containerName="extract" Jan 30 00:22:20 crc kubenswrapper[5104]: I0130 00:22:20.775378 5104 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="3025cc01-0b4c-401d-bdec-5fe14e497982" containerName="extract" Jan 30 00:22:20 crc kubenswrapper[5104]: I0130 00:22:20.775388 5104 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3025cc01-0b4c-401d-bdec-5fe14e497982" containerName="pull" Jan 30 00:22:20 crc kubenswrapper[5104]: I0130 00:22:20.775394 5104 state_mem.go:107] "Deleted CPUSet assignment" podUID="3025cc01-0b4c-401d-bdec-5fe14e497982" containerName="pull" Jan 30 00:22:20 crc kubenswrapper[5104]: I0130 00:22:20.775494 5104 memory_manager.go:356] "RemoveStaleState removing state" podUID="3025cc01-0b4c-401d-bdec-5fe14e497982" containerName="extract" Jan 30 00:22:20 crc kubenswrapper[5104]: I0130 00:22:20.778436 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56fbb5df75-dsksm" Jan 30 00:22:20 crc kubenswrapper[5104]: I0130 00:22:20.780161 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-service-cert\"" Jan 30 00:22:20 crc kubenswrapper[5104]: I0130 00:22:20.780275 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-dockercfg-gxxsc\"" Jan 30 00:22:20 crc kubenswrapper[5104]: I0130 00:22:20.791718 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-56fbb5df75-dsksm"] Jan 30 00:22:20 crc kubenswrapper[5104]: I0130 00:22:20.801076 5104 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-56fbb5df75-n78zn"] Jan 30 00:22:20 crc kubenswrapper[5104]: I0130 00:22:20.811120 5104 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56fbb5df75-n78zn" Jan 30 00:22:20 crc kubenswrapper[5104]: I0130 00:22:20.819703 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-56fbb5df75-n78zn"] Jan 30 00:22:20 crc kubenswrapper[5104]: I0130 00:22:20.824029 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/61a1034e-23f3-433b-9f89-3887202ac67b-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-56fbb5df75-n78zn\" (UID: \"61a1034e-23f3-433b-9f89-3887202ac67b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56fbb5df75-n78zn" Jan 30 00:22:20 crc kubenswrapper[5104]: I0130 00:22:20.824088 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/61a1034e-23f3-433b-9f89-3887202ac67b-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-56fbb5df75-n78zn\" (UID: \"61a1034e-23f3-433b-9f89-3887202ac67b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56fbb5df75-n78zn" Jan 30 00:22:20 crc kubenswrapper[5104]: I0130 00:22:20.824120 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xkbc\" (UniqueName: \"kubernetes.io/projected/e4a69f62-1737-47f0-9ad8-19f3eca7ea5a-kube-api-access-5xkbc\") pod \"obo-prometheus-operator-9bc85b4bf-lbsz6\" (UID: \"e4a69f62-1737-47f0-9ad8-19f3eca7ea5a\") " pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-lbsz6" Jan 30 00:22:20 crc kubenswrapper[5104]: I0130 00:22:20.924935 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3025cc01-0b4c-401d-bdec-5fe14e497982-bundle\") pod 
\"3025cc01-0b4c-401d-bdec-5fe14e497982\" (UID: \"3025cc01-0b4c-401d-bdec-5fe14e497982\") " Jan 30 00:22:20 crc kubenswrapper[5104]: I0130 00:22:20.925104 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3025cc01-0b4c-401d-bdec-5fe14e497982-util\") pod \"3025cc01-0b4c-401d-bdec-5fe14e497982\" (UID: \"3025cc01-0b4c-401d-bdec-5fe14e497982\") " Jan 30 00:22:20 crc kubenswrapper[5104]: I0130 00:22:20.925132 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qvt25\" (UniqueName: \"kubernetes.io/projected/3025cc01-0b4c-401d-bdec-5fe14e497982-kube-api-access-qvt25\") pod \"3025cc01-0b4c-401d-bdec-5fe14e497982\" (UID: \"3025cc01-0b4c-401d-bdec-5fe14e497982\") " Jan 30 00:22:20 crc kubenswrapper[5104]: I0130 00:22:20.925635 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/fc4ee410-e207-40f9-b067-488460ca04ef-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-56fbb5df75-dsksm\" (UID: \"fc4ee410-e207-40f9-b067-488460ca04ef\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56fbb5df75-dsksm" Jan 30 00:22:20 crc kubenswrapper[5104]: I0130 00:22:20.925756 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/61a1034e-23f3-433b-9f89-3887202ac67b-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-56fbb5df75-n78zn\" (UID: \"61a1034e-23f3-433b-9f89-3887202ac67b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56fbb5df75-n78zn" Jan 30 00:22:20 crc kubenswrapper[5104]: I0130 00:22:20.925900 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5xkbc\" (UniqueName: \"kubernetes.io/projected/e4a69f62-1737-47f0-9ad8-19f3eca7ea5a-kube-api-access-5xkbc\") pod 
\"obo-prometheus-operator-9bc85b4bf-lbsz6\" (UID: \"e4a69f62-1737-47f0-9ad8-19f3eca7ea5a\") " pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-lbsz6" Jan 30 00:22:20 crc kubenswrapper[5104]: I0130 00:22:20.926000 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3025cc01-0b4c-401d-bdec-5fe14e497982-bundle" (OuterVolumeSpecName: "bundle") pod "3025cc01-0b4c-401d-bdec-5fe14e497982" (UID: "3025cc01-0b4c-401d-bdec-5fe14e497982"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:22:20 crc kubenswrapper[5104]: I0130 00:22:20.926527 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4ee410-e207-40f9-b067-488460ca04ef-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-56fbb5df75-dsksm\" (UID: \"fc4ee410-e207-40f9-b067-488460ca04ef\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56fbb5df75-dsksm" Jan 30 00:22:20 crc kubenswrapper[5104]: I0130 00:22:20.926642 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/61a1034e-23f3-433b-9f89-3887202ac67b-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-56fbb5df75-n78zn\" (UID: \"61a1034e-23f3-433b-9f89-3887202ac67b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56fbb5df75-n78zn" Jan 30 00:22:20 crc kubenswrapper[5104]: I0130 00:22:20.926767 5104 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3025cc01-0b4c-401d-bdec-5fe14e497982-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 00:22:20 crc kubenswrapper[5104]: I0130 00:22:20.931800 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/61a1034e-23f3-433b-9f89-3887202ac67b-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-56fbb5df75-n78zn\" (UID: \"61a1034e-23f3-433b-9f89-3887202ac67b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56fbb5df75-n78zn" Jan 30 00:22:20 crc kubenswrapper[5104]: I0130 00:22:20.932463 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3025cc01-0b4c-401d-bdec-5fe14e497982-kube-api-access-qvt25" (OuterVolumeSpecName: "kube-api-access-qvt25") pod "3025cc01-0b4c-401d-bdec-5fe14e497982" (UID: "3025cc01-0b4c-401d-bdec-5fe14e497982"). InnerVolumeSpecName "kube-api-access-qvt25". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:22:20 crc kubenswrapper[5104]: I0130 00:22:20.939527 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3025cc01-0b4c-401d-bdec-5fe14e497982-util" (OuterVolumeSpecName: "util") pod "3025cc01-0b4c-401d-bdec-5fe14e497982" (UID: "3025cc01-0b4c-401d-bdec-5fe14e497982"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:22:20 crc kubenswrapper[5104]: I0130 00:22:20.945789 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/61a1034e-23f3-433b-9f89-3887202ac67b-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-56fbb5df75-n78zn\" (UID: \"61a1034e-23f3-433b-9f89-3887202ac67b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56fbb5df75-n78zn" Jan 30 00:22:20 crc kubenswrapper[5104]: I0130 00:22:20.951811 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xkbc\" (UniqueName: \"kubernetes.io/projected/e4a69f62-1737-47f0-9ad8-19f3eca7ea5a-kube-api-access-5xkbc\") pod \"obo-prometheus-operator-9bc85b4bf-lbsz6\" (UID: \"e4a69f62-1737-47f0-9ad8-19f3eca7ea5a\") " pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-lbsz6" Jan 30 00:22:20 crc kubenswrapper[5104]: I0130 00:22:20.976602 5104 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-85c68dddb-492rh"] Jan 30 00:22:20 crc kubenswrapper[5104]: I0130 00:22:20.983230 5104 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-85c68dddb-492rh" Jan 30 00:22:20 crc kubenswrapper[5104]: I0130 00:22:20.987471 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-85c68dddb-492rh"] Jan 30 00:22:20 crc kubenswrapper[5104]: I0130 00:22:20.987653 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"observability-operator-tls\"" Jan 30 00:22:20 crc kubenswrapper[5104]: I0130 00:22:20.987672 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"observability-operator-sa-dockercfg-fld7w\"" Jan 30 00:22:21 crc kubenswrapper[5104]: I0130 00:22:21.000264 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-lbsz6" Jan 30 00:22:21 crc kubenswrapper[5104]: I0130 00:22:21.027456 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4ee410-e207-40f9-b067-488460ca04ef-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-56fbb5df75-dsksm\" (UID: \"fc4ee410-e207-40f9-b067-488460ca04ef\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56fbb5df75-dsksm" Jan 30 00:22:21 crc kubenswrapper[5104]: I0130 00:22:21.027822 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/fc4ee410-e207-40f9-b067-488460ca04ef-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-56fbb5df75-dsksm\" (UID: \"fc4ee410-e207-40f9-b067-488460ca04ef\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56fbb5df75-dsksm" Jan 30 00:22:21 crc kubenswrapper[5104]: I0130 00:22:21.028003 5104 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/3025cc01-0b4c-401d-bdec-5fe14e497982-util\") on node \"crc\" DevicePath \"\"" Jan 30 00:22:21 crc kubenswrapper[5104]: I0130 00:22:21.028023 5104 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qvt25\" (UniqueName: \"kubernetes.io/projected/3025cc01-0b4c-401d-bdec-5fe14e497982-kube-api-access-qvt25\") on node \"crc\" DevicePath \"\"" Jan 30 00:22:21 crc kubenswrapper[5104]: I0130 00:22:21.031390 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/fc4ee410-e207-40f9-b067-488460ca04ef-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-56fbb5df75-dsksm\" (UID: \"fc4ee410-e207-40f9-b067-488460ca04ef\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56fbb5df75-dsksm" Jan 30 00:22:21 crc kubenswrapper[5104]: I0130 00:22:21.039080 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4ee410-e207-40f9-b067-488460ca04ef-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-56fbb5df75-dsksm\" (UID: \"fc4ee410-e207-40f9-b067-488460ca04ef\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56fbb5df75-dsksm" Jan 30 00:22:21 crc kubenswrapper[5104]: I0130 00:22:21.079467 5104 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-669c9f96b5-kt5v4"] Jan 30 00:22:21 crc kubenswrapper[5104]: I0130 00:22:21.092646 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-669c9f96b5-kt5v4" Jan 30 00:22:21 crc kubenswrapper[5104]: I0130 00:22:21.098146 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-669c9f96b5-kt5v4"] Jan 30 00:22:21 crc kubenswrapper[5104]: I0130 00:22:21.098309 5104 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56fbb5df75-dsksm" Jan 30 00:22:21 crc kubenswrapper[5104]: I0130 00:22:21.109427 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"perses-operator-dockercfg-2rkjj\"" Jan 30 00:22:21 crc kubenswrapper[5104]: I0130 00:22:21.129356 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/0f6e653f-1b86-4e85-82ea-bd5e8962100a-observability-operator-tls\") pod \"observability-operator-85c68dddb-492rh\" (UID: \"0f6e653f-1b86-4e85-82ea-bd5e8962100a\") " pod="openshift-operators/observability-operator-85c68dddb-492rh" Jan 30 00:22:21 crc kubenswrapper[5104]: I0130 00:22:21.129426 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r86nq\" (UniqueName: \"kubernetes.io/projected/0f6e653f-1b86-4e85-82ea-bd5e8962100a-kube-api-access-r86nq\") pod \"observability-operator-85c68dddb-492rh\" (UID: \"0f6e653f-1b86-4e85-82ea-bd5e8962100a\") " pod="openshift-operators/observability-operator-85c68dddb-492rh" Jan 30 00:22:21 crc kubenswrapper[5104]: I0130 00:22:21.134804 5104 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56fbb5df75-n78zn" Jan 30 00:22:21 crc kubenswrapper[5104]: I0130 00:22:21.232531 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69t5q\" (UniqueName: \"kubernetes.io/projected/f8751e81-7dbc-4b35-bf44-371140e56858-kube-api-access-69t5q\") pod \"perses-operator-669c9f96b5-kt5v4\" (UID: \"f8751e81-7dbc-4b35-bf44-371140e56858\") " pod="openshift-operators/perses-operator-669c9f96b5-kt5v4" Jan 30 00:22:21 crc kubenswrapper[5104]: I0130 00:22:21.232615 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/f8751e81-7dbc-4b35-bf44-371140e56858-openshift-service-ca\") pod \"perses-operator-669c9f96b5-kt5v4\" (UID: \"f8751e81-7dbc-4b35-bf44-371140e56858\") " pod="openshift-operators/perses-operator-669c9f96b5-kt5v4" Jan 30 00:22:21 crc kubenswrapper[5104]: I0130 00:22:21.232712 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/0f6e653f-1b86-4e85-82ea-bd5e8962100a-observability-operator-tls\") pod \"observability-operator-85c68dddb-492rh\" (UID: \"0f6e653f-1b86-4e85-82ea-bd5e8962100a\") " pod="openshift-operators/observability-operator-85c68dddb-492rh" Jan 30 00:22:21 crc kubenswrapper[5104]: I0130 00:22:21.232828 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-r86nq\" (UniqueName: \"kubernetes.io/projected/0f6e653f-1b86-4e85-82ea-bd5e8962100a-kube-api-access-r86nq\") pod \"observability-operator-85c68dddb-492rh\" (UID: \"0f6e653f-1b86-4e85-82ea-bd5e8962100a\") " pod="openshift-operators/observability-operator-85c68dddb-492rh" Jan 30 00:22:21 crc kubenswrapper[5104]: I0130 00:22:21.249278 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/0f6e653f-1b86-4e85-82ea-bd5e8962100a-observability-operator-tls\") pod \"observability-operator-85c68dddb-492rh\" (UID: \"0f6e653f-1b86-4e85-82ea-bd5e8962100a\") " pod="openshift-operators/observability-operator-85c68dddb-492rh" Jan 30 00:22:21 crc kubenswrapper[5104]: I0130 00:22:21.263998 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-r86nq\" (UniqueName: \"kubernetes.io/projected/0f6e653f-1b86-4e85-82ea-bd5e8962100a-kube-api-access-r86nq\") pod \"observability-operator-85c68dddb-492rh\" (UID: \"0f6e653f-1b86-4e85-82ea-bd5e8962100a\") " pod="openshift-operators/observability-operator-85c68dddb-492rh" Jan 30 00:22:21 crc kubenswrapper[5104]: I0130 00:22:21.277601 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-9bc85b4bf-lbsz6"] Jan 30 00:22:21 crc kubenswrapper[5104]: I0130 00:22:21.303479 5104 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-85c68dddb-492rh" Jan 30 00:22:21 crc kubenswrapper[5104]: I0130 00:22:21.333835 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-69t5q\" (UniqueName: \"kubernetes.io/projected/f8751e81-7dbc-4b35-bf44-371140e56858-kube-api-access-69t5q\") pod \"perses-operator-669c9f96b5-kt5v4\" (UID: \"f8751e81-7dbc-4b35-bf44-371140e56858\") " pod="openshift-operators/perses-operator-669c9f96b5-kt5v4" Jan 30 00:22:21 crc kubenswrapper[5104]: I0130 00:22:21.333912 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/f8751e81-7dbc-4b35-bf44-371140e56858-openshift-service-ca\") pod \"perses-operator-669c9f96b5-kt5v4\" (UID: \"f8751e81-7dbc-4b35-bf44-371140e56858\") " pod="openshift-operators/perses-operator-669c9f96b5-kt5v4" Jan 30 00:22:21 crc kubenswrapper[5104]: I0130 00:22:21.334843 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/f8751e81-7dbc-4b35-bf44-371140e56858-openshift-service-ca\") pod \"perses-operator-669c9f96b5-kt5v4\" (UID: \"f8751e81-7dbc-4b35-bf44-371140e56858\") " pod="openshift-operators/perses-operator-669c9f96b5-kt5v4" Jan 30 00:22:21 crc kubenswrapper[5104]: I0130 00:22:21.353301 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-69t5q\" (UniqueName: \"kubernetes.io/projected/f8751e81-7dbc-4b35-bf44-371140e56858-kube-api-access-69t5q\") pod \"perses-operator-669c9f96b5-kt5v4\" (UID: \"f8751e81-7dbc-4b35-bf44-371140e56858\") " pod="openshift-operators/perses-operator-669c9f96b5-kt5v4" Jan 30 00:22:21 crc kubenswrapper[5104]: I0130 00:22:21.370693 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-56fbb5df75-n78zn"] Jan 30 00:22:21 crc kubenswrapper[5104]: 
W0130 00:22:21.389412 5104 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod61a1034e_23f3_433b_9f89_3887202ac67b.slice/crio-aeb25988b3fda2c40cd8dadd2d9c25e9577838ade336c7e27b301645dd2680a4 WatchSource:0}: Error finding container aeb25988b3fda2c40cd8dadd2d9c25e9577838ade336c7e27b301645dd2680a4: Status 404 returned error can't find the container with id aeb25988b3fda2c40cd8dadd2d9c25e9577838ade336c7e27b301645dd2680a4 Jan 30 00:22:21 crc kubenswrapper[5104]: I0130 00:22:21.414747 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-669c9f96b5-kt5v4" Jan 30 00:22:21 crc kubenswrapper[5104]: I0130 00:22:21.478582 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-lbsz6" event={"ID":"e4a69f62-1737-47f0-9ad8-19f3eca7ea5a","Type":"ContainerStarted","Data":"a00c07456aeae1d093df2d38ff85ba2cb48788fb8bbc59a6c2a4dd1526ff35a6"} Jan 30 00:22:21 crc kubenswrapper[5104]: I0130 00:22:21.505190 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56fbb5df75-n78zn" event={"ID":"61a1034e-23f3-433b-9f89-3887202ac67b","Type":"ContainerStarted","Data":"aeb25988b3fda2c40cd8dadd2d9c25e9577838ade336c7e27b301645dd2680a4"} Jan 30 00:22:21 crc kubenswrapper[5104]: I0130 00:22:21.538141 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fmktgx" event={"ID":"3025cc01-0b4c-401d-bdec-5fe14e497982","Type":"ContainerDied","Data":"8f60aaca08bd733bb86d1fe7319d09888d342d466b30b23d5d2dd1cbb327764b"} Jan 30 00:22:21 crc kubenswrapper[5104]: I0130 00:22:21.538179 5104 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f60aaca08bd733bb86d1fe7319d09888d342d466b30b23d5d2dd1cbb327764b" Jan 30 00:22:21 crc 
kubenswrapper[5104]: I0130 00:22:21.538288 5104 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fmktgx" Jan 30 00:22:21 crc kubenswrapper[5104]: I0130 00:22:21.612158 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-85c68dddb-492rh"] Jan 30 00:22:21 crc kubenswrapper[5104]: I0130 00:22:21.642195 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-56fbb5df75-dsksm"] Jan 30 00:22:21 crc kubenswrapper[5104]: I0130 00:22:21.938426 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-669c9f96b5-kt5v4"] Jan 30 00:22:21 crc kubenswrapper[5104]: W0130 00:22:21.943459 5104 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf8751e81_7dbc_4b35_bf44_371140e56858.slice/crio-dd42c879635a48279f92075d4a9ba318cfe03b358f2436acdea1b320ebaa3fc2 WatchSource:0}: Error finding container dd42c879635a48279f92075d4a9ba318cfe03b358f2436acdea1b320ebaa3fc2: Status 404 returned error can't find the container with id dd42c879635a48279f92075d4a9ba318cfe03b358f2436acdea1b320ebaa3fc2 Jan 30 00:22:22 crc kubenswrapper[5104]: I0130 00:22:22.308885 5104 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5gtgqz"] Jan 30 00:22:22 crc kubenswrapper[5104]: I0130 00:22:22.314830 5104 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5gtgqz" Jan 30 00:22:22 crc kubenswrapper[5104]: I0130 00:22:22.316825 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5gtgqz"] Jan 30 00:22:22 crc kubenswrapper[5104]: I0130 00:22:22.349836 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbm9b\" (UniqueName: \"kubernetes.io/projected/e4c40fe6-90cc-4975-8d16-769c0291a313-kube-api-access-jbm9b\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5gtgqz\" (UID: \"e4c40fe6-90cc-4975-8d16-769c0291a313\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5gtgqz" Jan 30 00:22:22 crc kubenswrapper[5104]: I0130 00:22:22.349906 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e4c40fe6-90cc-4975-8d16-769c0291a313-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5gtgqz\" (UID: \"e4c40fe6-90cc-4975-8d16-769c0291a313\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5gtgqz" Jan 30 00:22:22 crc kubenswrapper[5104]: I0130 00:22:22.349981 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e4c40fe6-90cc-4975-8d16-769c0291a313-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5gtgqz\" (UID: \"e4c40fe6-90cc-4975-8d16-769c0291a313\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5gtgqz" Jan 30 00:22:22 crc kubenswrapper[5104]: I0130 00:22:22.451444 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jbm9b\" (UniqueName: 
\"kubernetes.io/projected/e4c40fe6-90cc-4975-8d16-769c0291a313-kube-api-access-jbm9b\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5gtgqz\" (UID: \"e4c40fe6-90cc-4975-8d16-769c0291a313\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5gtgqz" Jan 30 00:22:22 crc kubenswrapper[5104]: I0130 00:22:22.451508 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e4c40fe6-90cc-4975-8d16-769c0291a313-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5gtgqz\" (UID: \"e4c40fe6-90cc-4975-8d16-769c0291a313\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5gtgqz" Jan 30 00:22:22 crc kubenswrapper[5104]: I0130 00:22:22.451605 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e4c40fe6-90cc-4975-8d16-769c0291a313-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5gtgqz\" (UID: \"e4c40fe6-90cc-4975-8d16-769c0291a313\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5gtgqz" Jan 30 00:22:22 crc kubenswrapper[5104]: I0130 00:22:22.452159 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e4c40fe6-90cc-4975-8d16-769c0291a313-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5gtgqz\" (UID: \"e4c40fe6-90cc-4975-8d16-769c0291a313\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5gtgqz" Jan 30 00:22:22 crc kubenswrapper[5104]: I0130 00:22:22.452767 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e4c40fe6-90cc-4975-8d16-769c0291a313-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5gtgqz\" (UID: 
\"e4c40fe6-90cc-4975-8d16-769c0291a313\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5gtgqz" Jan 30 00:22:22 crc kubenswrapper[5104]: I0130 00:22:22.478526 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbm9b\" (UniqueName: \"kubernetes.io/projected/e4c40fe6-90cc-4975-8d16-769c0291a313-kube-api-access-jbm9b\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5gtgqz\" (UID: \"e4c40fe6-90cc-4975-8d16-769c0291a313\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5gtgqz" Jan 30 00:22:22 crc kubenswrapper[5104]: I0130 00:22:22.552027 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-669c9f96b5-kt5v4" event={"ID":"f8751e81-7dbc-4b35-bf44-371140e56858","Type":"ContainerStarted","Data":"dd42c879635a48279f92075d4a9ba318cfe03b358f2436acdea1b320ebaa3fc2"} Jan 30 00:22:22 crc kubenswrapper[5104]: I0130 00:22:22.553659 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-85c68dddb-492rh" event={"ID":"0f6e653f-1b86-4e85-82ea-bd5e8962100a","Type":"ContainerStarted","Data":"5916b09ccf83c10e0c6a4ae53683f00a3ad1f9bf3ff95529fcad81f5c8cc1414"} Jan 30 00:22:22 crc kubenswrapper[5104]: I0130 00:22:22.558841 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56fbb5df75-dsksm" event={"ID":"fc4ee410-e207-40f9-b067-488460ca04ef","Type":"ContainerStarted","Data":"019a9c465e856f26b2bb3f26271bdfc9e4be3bf11a09a226abcfce9cb8ba9742"} Jan 30 00:22:22 crc kubenswrapper[5104]: I0130 00:22:22.687200 5104 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5gtgqz" Jan 30 00:22:23 crc kubenswrapper[5104]: I0130 00:22:23.014534 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5gtgqz"] Jan 30 00:22:23 crc kubenswrapper[5104]: W0130 00:22:23.074992 5104 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode4c40fe6_90cc_4975_8d16_769c0291a313.slice/crio-dd48702612b6caaad832b8e41fb6658005ce2841fe712682637ed847b9b008cd WatchSource:0}: Error finding container dd48702612b6caaad832b8e41fb6658005ce2841fe712682637ed847b9b008cd: Status 404 returned error can't find the container with id dd48702612b6caaad832b8e41fb6658005ce2841fe712682637ed847b9b008cd Jan 30 00:22:23 crc kubenswrapper[5104]: I0130 00:22:23.580951 5104 generic.go:358] "Generic (PLEG): container finished" podID="e4c40fe6-90cc-4975-8d16-769c0291a313" containerID="9ba65dd63b462056b2629f1b716543fb314bac1641d230a58715f58f7b856fec" exitCode=0 Jan 30 00:22:23 crc kubenswrapper[5104]: I0130 00:22:23.581366 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5gtgqz" event={"ID":"e4c40fe6-90cc-4975-8d16-769c0291a313","Type":"ContainerDied","Data":"9ba65dd63b462056b2629f1b716543fb314bac1641d230a58715f58f7b856fec"} Jan 30 00:22:23 crc kubenswrapper[5104]: I0130 00:22:23.581391 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5gtgqz" event={"ID":"e4c40fe6-90cc-4975-8d16-769c0291a313","Type":"ContainerStarted","Data":"dd48702612b6caaad832b8e41fb6658005ce2841fe712682637ed847b9b008cd"} Jan 30 00:22:27 crc kubenswrapper[5104]: E0130 00:22:27.770277 5104 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = 
unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving" image="registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb" Jan 30 00:22:27 crc kubenswrapper[5104]: E0130 00:22:27.770491 5104 kuberuntime_manager.go:1358] "Unhandled Error" err="init container &Container{Name:pull,Image:registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb,Command:[/util/cpb /bundle],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:bundle,ReadOnly:false,MountPath:/bundle,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:util,ReadOnly:false,MountPath:/util,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kfpw5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000240000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ektxjv_openshift-marketplace(bfae5940-0f71-4c0a-92bc-3296f59b008c): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving" logger="UnhandledError" Jan 30 00:22:27 crc kubenswrapper[5104]: E0130 
00:22:27.771742 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ektxjv" podUID="bfae5940-0f71-4c0a-92bc-3296f59b008c" Jan 30 00:22:33 crc kubenswrapper[5104]: I0130 00:22:33.653938 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-669c9f96b5-kt5v4" event={"ID":"f8751e81-7dbc-4b35-bf44-371140e56858","Type":"ContainerStarted","Data":"f1d55d80000ed1489a00250aa88344a425d8434840e15ef5aa38ab3083c61068"} Jan 30 00:22:33 crc kubenswrapper[5104]: I0130 00:22:33.655412 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operators/perses-operator-669c9f96b5-kt5v4" Jan 30 00:22:33 crc kubenswrapper[5104]: I0130 00:22:33.657394 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-lbsz6" event={"ID":"e4a69f62-1737-47f0-9ad8-19f3eca7ea5a","Type":"ContainerStarted","Data":"ddfbdb588fdaac027971b23ef30cc150d2e034e3816f3e2ddc87a926af103091"} Jan 30 00:22:33 crc kubenswrapper[5104]: I0130 00:22:33.659347 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-85c68dddb-492rh" 
event={"ID":"0f6e653f-1b86-4e85-82ea-bd5e8962100a","Type":"ContainerStarted","Data":"840c433259a5616daec2a4f5a0197b48042fb23c7a06e5c8f33618deff08be92"} Jan 30 00:22:33 crc kubenswrapper[5104]: I0130 00:22:33.659701 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operators/observability-operator-85c68dddb-492rh" Jan 30 00:22:33 crc kubenswrapper[5104]: I0130 00:22:33.661332 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56fbb5df75-dsksm" event={"ID":"fc4ee410-e207-40f9-b067-488460ca04ef","Type":"ContainerStarted","Data":"10649c1a69b776353f382d285798636d5c004aca2965bb818fe30ef663de84b7"} Jan 30 00:22:33 crc kubenswrapper[5104]: I0130 00:22:33.662691 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-85c68dddb-492rh" Jan 30 00:22:33 crc kubenswrapper[5104]: I0130 00:22:33.663356 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56fbb5df75-n78zn" event={"ID":"61a1034e-23f3-433b-9f89-3887202ac67b","Type":"ContainerStarted","Data":"a2826a1f9ade8f83d0f29efe133d3ea371d82abda0dcaf0c28da7f58665d2fb4"} Jan 30 00:22:33 crc kubenswrapper[5104]: I0130 00:22:33.665461 5104 generic.go:358] "Generic (PLEG): container finished" podID="e4c40fe6-90cc-4975-8d16-769c0291a313" containerID="683c9c60f9993135741f3ab21ef32d494b890a62942c27a6f5ffafff5894dc7a" exitCode=0 Jan 30 00:22:33 crc kubenswrapper[5104]: I0130 00:22:33.665512 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5gtgqz" event={"ID":"e4c40fe6-90cc-4975-8d16-769c0291a313","Type":"ContainerDied","Data":"683c9c60f9993135741f3ab21ef32d494b890a62942c27a6f5ffafff5894dc7a"} Jan 30 00:22:33 crc kubenswrapper[5104]: I0130 00:22:33.676105 5104 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-operators/perses-operator-669c9f96b5-kt5v4" podStartSLOduration=1.951025604 podStartE2EDuration="12.676081671s" podCreationTimestamp="2026-01-30 00:22:21 +0000 UTC" firstStartedPulling="2026-01-30 00:22:21.946406381 +0000 UTC m=+722.678745610" lastFinishedPulling="2026-01-30 00:22:32.671462438 +0000 UTC m=+733.403801677" observedRunningTime="2026-01-30 00:22:33.670678975 +0000 UTC m=+734.403018194" watchObservedRunningTime="2026-01-30 00:22:33.676081671 +0000 UTC m=+734.408420890" Jan 30 00:22:33 crc kubenswrapper[5104]: I0130 00:22:33.690773 5104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-lbsz6" podStartSLOduration=2.369241083 podStartE2EDuration="13.690755637s" podCreationTimestamp="2026-01-30 00:22:20 +0000 UTC" firstStartedPulling="2026-01-30 00:22:21.3111689 +0000 UTC m=+722.043508129" lastFinishedPulling="2026-01-30 00:22:32.632683464 +0000 UTC m=+733.365022683" observedRunningTime="2026-01-30 00:22:33.687608672 +0000 UTC m=+734.419947901" watchObservedRunningTime="2026-01-30 00:22:33.690755637 +0000 UTC m=+734.423094866" Jan 30 00:22:33 crc kubenswrapper[5104]: I0130 00:22:33.739868 5104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56fbb5df75-dsksm" podStartSLOduration=2.693397895 podStartE2EDuration="13.739834008s" podCreationTimestamp="2026-01-30 00:22:20 +0000 UTC" firstStartedPulling="2026-01-30 00:22:21.659631767 +0000 UTC m=+722.391970986" lastFinishedPulling="2026-01-30 00:22:32.70606788 +0000 UTC m=+733.438407099" observedRunningTime="2026-01-30 00:22:33.712196194 +0000 UTC m=+734.444535413" watchObservedRunningTime="2026-01-30 00:22:33.739834008 +0000 UTC m=+734.472173227" Jan 30 00:22:33 crc kubenswrapper[5104]: I0130 00:22:33.742212 5104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-56fbb5df75-n78zn" podStartSLOduration=2.489431859 podStartE2EDuration="13.742204962s" podCreationTimestamp="2026-01-30 00:22:20 +0000 UTC" firstStartedPulling="2026-01-30 00:22:21.390972859 +0000 UTC m=+722.123312078" lastFinishedPulling="2026-01-30 00:22:32.643745972 +0000 UTC m=+733.376085181" observedRunningTime="2026-01-30 00:22:33.736711964 +0000 UTC m=+734.469051183" watchObservedRunningTime="2026-01-30 00:22:33.742204962 +0000 UTC m=+734.474544191" Jan 30 00:22:33 crc kubenswrapper[5104]: I0130 00:22:33.797015 5104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-85c68dddb-492rh" podStartSLOduration=2.698244896 podStartE2EDuration="13.796996049s" podCreationTimestamp="2026-01-30 00:22:20 +0000 UTC" firstStartedPulling="2026-01-30 00:22:21.623115303 +0000 UTC m=+722.355454522" lastFinishedPulling="2026-01-30 00:22:32.721866456 +0000 UTC m=+733.454205675" observedRunningTime="2026-01-30 00:22:33.773711311 +0000 UTC m=+734.506050530" watchObservedRunningTime="2026-01-30 00:22:33.796996049 +0000 UTC m=+734.529335268" Jan 30 00:22:34 crc kubenswrapper[5104]: I0130 00:22:34.672474 5104 generic.go:358] "Generic (PLEG): container finished" podID="e4c40fe6-90cc-4975-8d16-769c0291a313" containerID="6f946ca6bf11f801a5f6440070fca73d318fdf61841d2aebbbb69699ac4a78a1" exitCode=0 Jan 30 00:22:34 crc kubenswrapper[5104]: I0130 00:22:34.673544 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5gtgqz" event={"ID":"e4c40fe6-90cc-4975-8d16-769c0291a313","Type":"ContainerDied","Data":"6f946ca6bf11f801a5f6440070fca73d318fdf61841d2aebbbb69699ac4a78a1"} Jan 30 00:22:34 crc kubenswrapper[5104]: I0130 00:22:34.872221 5104 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-l2gg7"] Jan 30 00:22:34 crc kubenswrapper[5104]: I0130 
00:22:34.878638 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l2gg7" Jan 30 00:22:34 crc kubenswrapper[5104]: I0130 00:22:34.893252 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l2gg7"] Jan 30 00:22:34 crc kubenswrapper[5104]: I0130 00:22:34.930423 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18c679d9-4746-4a28-928e-3ea0d1dbfa89-utilities\") pod \"redhat-operators-l2gg7\" (UID: \"18c679d9-4746-4a28-928e-3ea0d1dbfa89\") " pod="openshift-marketplace/redhat-operators-l2gg7" Jan 30 00:22:34 crc kubenswrapper[5104]: I0130 00:22:34.930487 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kg2nr\" (UniqueName: \"kubernetes.io/projected/18c679d9-4746-4a28-928e-3ea0d1dbfa89-kube-api-access-kg2nr\") pod \"redhat-operators-l2gg7\" (UID: \"18c679d9-4746-4a28-928e-3ea0d1dbfa89\") " pod="openshift-marketplace/redhat-operators-l2gg7" Jan 30 00:22:34 crc kubenswrapper[5104]: I0130 00:22:34.930534 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18c679d9-4746-4a28-928e-3ea0d1dbfa89-catalog-content\") pod \"redhat-operators-l2gg7\" (UID: \"18c679d9-4746-4a28-928e-3ea0d1dbfa89\") " pod="openshift-marketplace/redhat-operators-l2gg7" Jan 30 00:22:35 crc kubenswrapper[5104]: I0130 00:22:35.031644 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18c679d9-4746-4a28-928e-3ea0d1dbfa89-utilities\") pod \"redhat-operators-l2gg7\" (UID: \"18c679d9-4746-4a28-928e-3ea0d1dbfa89\") " pod="openshift-marketplace/redhat-operators-l2gg7" Jan 30 00:22:35 crc kubenswrapper[5104]: I0130 00:22:35.031712 5104 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kg2nr\" (UniqueName: \"kubernetes.io/projected/18c679d9-4746-4a28-928e-3ea0d1dbfa89-kube-api-access-kg2nr\") pod \"redhat-operators-l2gg7\" (UID: \"18c679d9-4746-4a28-928e-3ea0d1dbfa89\") " pod="openshift-marketplace/redhat-operators-l2gg7"
Jan 30 00:22:35 crc kubenswrapper[5104]: I0130 00:22:35.031758 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18c679d9-4746-4a28-928e-3ea0d1dbfa89-catalog-content\") pod \"redhat-operators-l2gg7\" (UID: \"18c679d9-4746-4a28-928e-3ea0d1dbfa89\") " pod="openshift-marketplace/redhat-operators-l2gg7"
Jan 30 00:22:35 crc kubenswrapper[5104]: I0130 00:22:35.032202 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18c679d9-4746-4a28-928e-3ea0d1dbfa89-catalog-content\") pod \"redhat-operators-l2gg7\" (UID: \"18c679d9-4746-4a28-928e-3ea0d1dbfa89\") " pod="openshift-marketplace/redhat-operators-l2gg7"
Jan 30 00:22:35 crc kubenswrapper[5104]: I0130 00:22:35.032422 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18c679d9-4746-4a28-928e-3ea0d1dbfa89-utilities\") pod \"redhat-operators-l2gg7\" (UID: \"18c679d9-4746-4a28-928e-3ea0d1dbfa89\") " pod="openshift-marketplace/redhat-operators-l2gg7"
Jan 30 00:22:35 crc kubenswrapper[5104]: I0130 00:22:35.052393 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kg2nr\" (UniqueName: \"kubernetes.io/projected/18c679d9-4746-4a28-928e-3ea0d1dbfa89-kube-api-access-kg2nr\") pod \"redhat-operators-l2gg7\" (UID: \"18c679d9-4746-4a28-928e-3ea0d1dbfa89\") " pod="openshift-marketplace/redhat-operators-l2gg7"
Jan 30 00:22:35 crc kubenswrapper[5104]: I0130 00:22:35.199358 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l2gg7"
Jan 30 00:22:35 crc kubenswrapper[5104]: I0130 00:22:35.427334 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l2gg7"]
Jan 30 00:22:35 crc kubenswrapper[5104]: I0130 00:22:35.679794 5104 generic.go:358] "Generic (PLEG): container finished" podID="18c679d9-4746-4a28-928e-3ea0d1dbfa89" containerID="76de996aebc0e32a1b1bd087a5b0703bfca7c9567b80bba18bfbf1d2aaec332f" exitCode=0
Jan 30 00:22:35 crc kubenswrapper[5104]: I0130 00:22:35.679872 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l2gg7" event={"ID":"18c679d9-4746-4a28-928e-3ea0d1dbfa89","Type":"ContainerDied","Data":"76de996aebc0e32a1b1bd087a5b0703bfca7c9567b80bba18bfbf1d2aaec332f"}
Jan 30 00:22:35 crc kubenswrapper[5104]: I0130 00:22:35.680242 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l2gg7" event={"ID":"18c679d9-4746-4a28-928e-3ea0d1dbfa89","Type":"ContainerStarted","Data":"06a41e98231adc7930bf2a8e73c131a4a87de5f791858bc772fbd5dcde137083"}
Jan 30 00:22:35 crc kubenswrapper[5104]: I0130 00:22:35.881731 5104 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5gtgqz"
Jan 30 00:22:35 crc kubenswrapper[5104]: I0130 00:22:35.943136 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e4c40fe6-90cc-4975-8d16-769c0291a313-util\") pod \"e4c40fe6-90cc-4975-8d16-769c0291a313\" (UID: \"e4c40fe6-90cc-4975-8d16-769c0291a313\") "
Jan 30 00:22:35 crc kubenswrapper[5104]: I0130 00:22:35.943280 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbm9b\" (UniqueName: \"kubernetes.io/projected/e4c40fe6-90cc-4975-8d16-769c0291a313-kube-api-access-jbm9b\") pod \"e4c40fe6-90cc-4975-8d16-769c0291a313\" (UID: \"e4c40fe6-90cc-4975-8d16-769c0291a313\") "
Jan 30 00:22:35 crc kubenswrapper[5104]: I0130 00:22:35.943387 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e4c40fe6-90cc-4975-8d16-769c0291a313-bundle\") pod \"e4c40fe6-90cc-4975-8d16-769c0291a313\" (UID: \"e4c40fe6-90cc-4975-8d16-769c0291a313\") "
Jan 30 00:22:35 crc kubenswrapper[5104]: I0130 00:22:35.944246 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4c40fe6-90cc-4975-8d16-769c0291a313-bundle" (OuterVolumeSpecName: "bundle") pod "e4c40fe6-90cc-4975-8d16-769c0291a313" (UID: "e4c40fe6-90cc-4975-8d16-769c0291a313"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 30 00:22:35 crc kubenswrapper[5104]: I0130 00:22:35.953060 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4c40fe6-90cc-4975-8d16-769c0291a313-kube-api-access-jbm9b" (OuterVolumeSpecName: "kube-api-access-jbm9b") pod "e4c40fe6-90cc-4975-8d16-769c0291a313" (UID: "e4c40fe6-90cc-4975-8d16-769c0291a313"). InnerVolumeSpecName "kube-api-access-jbm9b". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 30 00:22:35 crc kubenswrapper[5104]: I0130 00:22:35.959159 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4c40fe6-90cc-4975-8d16-769c0291a313-util" (OuterVolumeSpecName: "util") pod "e4c40fe6-90cc-4975-8d16-769c0291a313" (UID: "e4c40fe6-90cc-4975-8d16-769c0291a313"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 30 00:22:36 crc kubenswrapper[5104]: I0130 00:22:36.044512 5104 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jbm9b\" (UniqueName: \"kubernetes.io/projected/e4c40fe6-90cc-4975-8d16-769c0291a313-kube-api-access-jbm9b\") on node \"crc\" DevicePath \"\""
Jan 30 00:22:36 crc kubenswrapper[5104]: I0130 00:22:36.044563 5104 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e4c40fe6-90cc-4975-8d16-769c0291a313-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 00:22:36 crc kubenswrapper[5104]: I0130 00:22:36.044572 5104 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e4c40fe6-90cc-4975-8d16-769c0291a313-util\") on node \"crc\" DevicePath \"\""
Jan 30 00:22:36 crc kubenswrapper[5104]: I0130 00:22:36.686776 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5gtgqz" event={"ID":"e4c40fe6-90cc-4975-8d16-769c0291a313","Type":"ContainerDied","Data":"dd48702612b6caaad832b8e41fb6658005ce2841fe712682637ed847b9b008cd"}
Jan 30 00:22:36 crc kubenswrapper[5104]: I0130 00:22:36.687150 5104 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd48702612b6caaad832b8e41fb6658005ce2841fe712682637ed847b9b008cd"
Jan 30 00:22:36 crc kubenswrapper[5104]: I0130 00:22:36.687247 5104 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5gtgqz"
Jan 30 00:22:36 crc kubenswrapper[5104]: I0130 00:22:36.690884 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l2gg7" event={"ID":"18c679d9-4746-4a28-928e-3ea0d1dbfa89","Type":"ContainerStarted","Data":"5306686aa1cf6788f8f6c2bb099f5cb73de5173d8de93b6ed57e6ea05c6f3c6e"}
Jan 30 00:22:37 crc kubenswrapper[5104]: I0130 00:22:37.696997 5104 generic.go:358] "Generic (PLEG): container finished" podID="18c679d9-4746-4a28-928e-3ea0d1dbfa89" containerID="5306686aa1cf6788f8f6c2bb099f5cb73de5173d8de93b6ed57e6ea05c6f3c6e" exitCode=0
Jan 30 00:22:37 crc kubenswrapper[5104]: I0130 00:22:37.697163 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l2gg7" event={"ID":"18c679d9-4746-4a28-928e-3ea0d1dbfa89","Type":"ContainerDied","Data":"5306686aa1cf6788f8f6c2bb099f5cb73de5173d8de93b6ed57e6ea05c6f3c6e"}
Jan 30 00:22:38 crc kubenswrapper[5104]: I0130 00:22:38.704932 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l2gg7" event={"ID":"18c679d9-4746-4a28-928e-3ea0d1dbfa89","Type":"ContainerStarted","Data":"731b0963aaba991a102fc0003dab997dc5c9dc70026309f4d3e922c2d0057b69"}
Jan 30 00:22:38 crc kubenswrapper[5104]: I0130 00:22:38.729569 5104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-l2gg7" podStartSLOduration=3.951508443 podStartE2EDuration="4.729548333s" podCreationTimestamp="2026-01-30 00:22:34 +0000 UTC" firstStartedPulling="2026-01-30 00:22:35.680572359 +0000 UTC m=+736.412911578" lastFinishedPulling="2026-01-30 00:22:36.458612249 +0000 UTC m=+737.190951468" observedRunningTime="2026-01-30 00:22:38.723991014 +0000 UTC m=+739.456330243" watchObservedRunningTime="2026-01-30 00:22:38.729548333 +0000 UTC m=+739.461887552"
Jan 30 00:22:41 crc kubenswrapper[5104]: I0130 00:22:41.209985 5104 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-4tgsq"]
Jan 30 00:22:41 crc kubenswrapper[5104]: I0130 00:22:41.210752 5104 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e4c40fe6-90cc-4975-8d16-769c0291a313" containerName="extract"
Jan 30 00:22:41 crc kubenswrapper[5104]: I0130 00:22:41.210764 5104 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4c40fe6-90cc-4975-8d16-769c0291a313" containerName="extract"
Jan 30 00:22:41 crc kubenswrapper[5104]: I0130 00:22:41.210777 5104 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e4c40fe6-90cc-4975-8d16-769c0291a313" containerName="pull"
Jan 30 00:22:41 crc kubenswrapper[5104]: I0130 00:22:41.210782 5104 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4c40fe6-90cc-4975-8d16-769c0291a313" containerName="pull"
Jan 30 00:22:41 crc kubenswrapper[5104]: I0130 00:22:41.210796 5104 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e4c40fe6-90cc-4975-8d16-769c0291a313" containerName="util"
Jan 30 00:22:41 crc kubenswrapper[5104]: I0130 00:22:41.210801 5104 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4c40fe6-90cc-4975-8d16-769c0291a313" containerName="util"
Jan 30 00:22:41 crc kubenswrapper[5104]: I0130 00:22:41.210940 5104 memory_manager.go:356] "RemoveStaleState removing state" podUID="e4c40fe6-90cc-4975-8d16-769c0291a313" containerName="extract"
Jan 30 00:22:41 crc kubenswrapper[5104]: I0130 00:22:41.221496 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-4tgsq"
Jan 30 00:22:41 crc kubenswrapper[5104]: I0130 00:22:41.226891 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager-operator\"/\"kube-root-ca.crt\""
Jan 30 00:22:41 crc kubenswrapper[5104]: I0130 00:22:41.227304 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager-operator\"/\"openshift-service-ca.crt\""
Jan 30 00:22:41 crc kubenswrapper[5104]: I0130 00:22:41.227640 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager-operator\"/\"cert-manager-operator-controller-manager-dockercfg-9rhvc\""
Jan 30 00:22:41 crc kubenswrapper[5104]: I0130 00:22:41.230379 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-4tgsq"]
Jan 30 00:22:41 crc kubenswrapper[5104]: I0130 00:22:41.325069 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rngs\" (UniqueName: \"kubernetes.io/projected/eb3d9567-2dcf-4f57-bc76-d373c694b5f3-kube-api-access-8rngs\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-4tgsq\" (UID: \"eb3d9567-2dcf-4f57-bc76-d373c694b5f3\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-4tgsq"
Jan 30 00:22:41 crc kubenswrapper[5104]: I0130 00:22:41.325133 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/eb3d9567-2dcf-4f57-bc76-d373c694b5f3-tmp\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-4tgsq\" (UID: \"eb3d9567-2dcf-4f57-bc76-d373c694b5f3\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-4tgsq"
Jan 30 00:22:41 crc kubenswrapper[5104]: I0130 00:22:41.426626 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8rngs\" (UniqueName: \"kubernetes.io/projected/eb3d9567-2dcf-4f57-bc76-d373c694b5f3-kube-api-access-8rngs\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-4tgsq\" (UID: \"eb3d9567-2dcf-4f57-bc76-d373c694b5f3\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-4tgsq"
Jan 30 00:22:41 crc kubenswrapper[5104]: I0130 00:22:41.426679 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/eb3d9567-2dcf-4f57-bc76-d373c694b5f3-tmp\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-4tgsq\" (UID: \"eb3d9567-2dcf-4f57-bc76-d373c694b5f3\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-4tgsq"
Jan 30 00:22:41 crc kubenswrapper[5104]: I0130 00:22:41.427153 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/eb3d9567-2dcf-4f57-bc76-d373c694b5f3-tmp\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-4tgsq\" (UID: \"eb3d9567-2dcf-4f57-bc76-d373c694b5f3\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-4tgsq"
Jan 30 00:22:41 crc kubenswrapper[5104]: I0130 00:22:41.460188 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rngs\" (UniqueName: \"kubernetes.io/projected/eb3d9567-2dcf-4f57-bc76-d373c694b5f3-kube-api-access-8rngs\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-4tgsq\" (UID: \"eb3d9567-2dcf-4f57-bc76-d373c694b5f3\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-4tgsq"
Jan 30 00:22:41 crc kubenswrapper[5104]: E0130 00:22:41.527492 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ektxjv" podUID="bfae5940-0f71-4c0a-92bc-3296f59b008c"
Jan 30 00:22:41 crc kubenswrapper[5104]: I0130 00:22:41.539215 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-4tgsq"
Jan 30 00:22:41 crc kubenswrapper[5104]: I0130 00:22:41.828273 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-4tgsq"]
Jan 30 00:22:41 crc kubenswrapper[5104]: W0130 00:22:41.832131 5104 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeb3d9567_2dcf_4f57_bc76_d373c694b5f3.slice/crio-93337239dd99cef99552005514a5cfd249a4a7c866c1e909e7a47199fb1eb44f WatchSource:0}: Error finding container 93337239dd99cef99552005514a5cfd249a4a7c866c1e909e7a47199fb1eb44f: Status 404 returned error can't find the container with id 93337239dd99cef99552005514a5cfd249a4a7c866c1e909e7a47199fb1eb44f
Jan 30 00:22:42 crc kubenswrapper[5104]: I0130 00:22:42.728777 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-4tgsq" event={"ID":"eb3d9567-2dcf-4f57-bc76-d373c694b5f3","Type":"ContainerStarted","Data":"93337239dd99cef99552005514a5cfd249a4a7c866c1e909e7a47199fb1eb44f"}
Jan 30 00:22:45 crc kubenswrapper[5104]: I0130 00:22:45.200078 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-l2gg7"
Jan 30 00:22:45 crc kubenswrapper[5104]: I0130 00:22:45.200446 5104 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-l2gg7"
Jan 30 00:22:45 crc kubenswrapper[5104]: I0130 00:22:45.237041 5104 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-l2gg7"
Jan 30 00:22:45 crc kubenswrapper[5104]: I0130 00:22:45.683210 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-669c9f96b5-kt5v4"
Jan 30 00:22:45 crc kubenswrapper[5104]: I0130 00:22:45.783363 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-l2gg7"
Jan 30 00:22:47 crc kubenswrapper[5104]: I0130 00:22:47.762355 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-4tgsq" event={"ID":"eb3d9567-2dcf-4f57-bc76-d373c694b5f3","Type":"ContainerStarted","Data":"052aa8677eaf349066d076fe3e71899fc605f1dedfb1b1d915ae20670a601c3b"}
Jan 30 00:22:47 crc kubenswrapper[5104]: I0130 00:22:47.807560 5104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-4tgsq" podStartSLOduration=1.925380793 podStartE2EDuration="6.807543751s" podCreationTimestamp="2026-01-30 00:22:41 +0000 UTC" firstStartedPulling="2026-01-30 00:22:41.837523498 +0000 UTC m=+742.569862717" lastFinishedPulling="2026-01-30 00:22:46.719686456 +0000 UTC m=+747.452025675" observedRunningTime="2026-01-30 00:22:47.803566163 +0000 UTC m=+748.535905402" watchObservedRunningTime="2026-01-30 00:22:47.807543751 +0000 UTC m=+748.539882970"
Jan 30 00:22:48 crc kubenswrapper[5104]: I0130 00:22:48.464188 5104 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-l2gg7"]
Jan 30 00:22:48 crc kubenswrapper[5104]: I0130 00:22:48.464499 5104 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-l2gg7" podUID="18c679d9-4746-4a28-928e-3ea0d1dbfa89" containerName="registry-server" containerID="cri-o://731b0963aaba991a102fc0003dab997dc5c9dc70026309f4d3e922c2d0057b69" gracePeriod=2
Jan 30 00:22:48 crc kubenswrapper[5104]: I0130 00:22:48.778180 5104 generic.go:358] "Generic (PLEG): container finished" podID="18c679d9-4746-4a28-928e-3ea0d1dbfa89" containerID="731b0963aaba991a102fc0003dab997dc5c9dc70026309f4d3e922c2d0057b69" exitCode=0
Jan 30 00:22:48 crc kubenswrapper[5104]: I0130 00:22:48.778779 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l2gg7" event={"ID":"18c679d9-4746-4a28-928e-3ea0d1dbfa89","Type":"ContainerDied","Data":"731b0963aaba991a102fc0003dab997dc5c9dc70026309f4d3e922c2d0057b69"}
Jan 30 00:22:48 crc kubenswrapper[5104]: I0130 00:22:48.995443 5104 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l2gg7"
Jan 30 00:22:49 crc kubenswrapper[5104]: I0130 00:22:49.016279 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kg2nr\" (UniqueName: \"kubernetes.io/projected/18c679d9-4746-4a28-928e-3ea0d1dbfa89-kube-api-access-kg2nr\") pod \"18c679d9-4746-4a28-928e-3ea0d1dbfa89\" (UID: \"18c679d9-4746-4a28-928e-3ea0d1dbfa89\") "
Jan 30 00:22:49 crc kubenswrapper[5104]: I0130 00:22:49.016377 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18c679d9-4746-4a28-928e-3ea0d1dbfa89-utilities\") pod \"18c679d9-4746-4a28-928e-3ea0d1dbfa89\" (UID: \"18c679d9-4746-4a28-928e-3ea0d1dbfa89\") "
Jan 30 00:22:49 crc kubenswrapper[5104]: I0130 00:22:49.016476 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18c679d9-4746-4a28-928e-3ea0d1dbfa89-catalog-content\") pod \"18c679d9-4746-4a28-928e-3ea0d1dbfa89\" (UID: \"18c679d9-4746-4a28-928e-3ea0d1dbfa89\") "
Jan 30 00:22:49 crc kubenswrapper[5104]: I0130 00:22:49.017442 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18c679d9-4746-4a28-928e-3ea0d1dbfa89-utilities" (OuterVolumeSpecName: "utilities") pod "18c679d9-4746-4a28-928e-3ea0d1dbfa89" (UID: "18c679d9-4746-4a28-928e-3ea0d1dbfa89"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 30 00:22:49 crc kubenswrapper[5104]: I0130 00:22:49.024026 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18c679d9-4746-4a28-928e-3ea0d1dbfa89-kube-api-access-kg2nr" (OuterVolumeSpecName: "kube-api-access-kg2nr") pod "18c679d9-4746-4a28-928e-3ea0d1dbfa89" (UID: "18c679d9-4746-4a28-928e-3ea0d1dbfa89"). InnerVolumeSpecName "kube-api-access-kg2nr". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 30 00:22:49 crc kubenswrapper[5104]: I0130 00:22:49.117890 5104 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18c679d9-4746-4a28-928e-3ea0d1dbfa89-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 00:22:49 crc kubenswrapper[5104]: I0130 00:22:49.117919 5104 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kg2nr\" (UniqueName: \"kubernetes.io/projected/18c679d9-4746-4a28-928e-3ea0d1dbfa89-kube-api-access-kg2nr\") on node \"crc\" DevicePath \"\""
Jan 30 00:22:49 crc kubenswrapper[5104]: I0130 00:22:49.123280 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18c679d9-4746-4a28-928e-3ea0d1dbfa89-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "18c679d9-4746-4a28-928e-3ea0d1dbfa89" (UID: "18c679d9-4746-4a28-928e-3ea0d1dbfa89"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 30 00:22:49 crc kubenswrapper[5104]: I0130 00:22:49.218672 5104 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18c679d9-4746-4a28-928e-3ea0d1dbfa89-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 00:22:49 crc kubenswrapper[5104]: I0130 00:22:49.785764 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l2gg7" event={"ID":"18c679d9-4746-4a28-928e-3ea0d1dbfa89","Type":"ContainerDied","Data":"06a41e98231adc7930bf2a8e73c131a4a87de5f791858bc772fbd5dcde137083"}
Jan 30 00:22:49 crc kubenswrapper[5104]: I0130 00:22:49.786052 5104 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l2gg7"
Jan 30 00:22:49 crc kubenswrapper[5104]: I0130 00:22:49.786081 5104 scope.go:117] "RemoveContainer" containerID="731b0963aaba991a102fc0003dab997dc5c9dc70026309f4d3e922c2d0057b69"
Jan 30 00:22:49 crc kubenswrapper[5104]: I0130 00:22:49.804122 5104 scope.go:117] "RemoveContainer" containerID="5306686aa1cf6788f8f6c2bb099f5cb73de5173d8de93b6ed57e6ea05c6f3c6e"
Jan 30 00:22:49 crc kubenswrapper[5104]: I0130 00:22:49.812245 5104 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-l2gg7"]
Jan 30 00:22:49 crc kubenswrapper[5104]: I0130 00:22:49.815650 5104 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-l2gg7"]
Jan 30 00:22:49 crc kubenswrapper[5104]: I0130 00:22:49.825602 5104 scope.go:117] "RemoveContainer" containerID="76de996aebc0e32a1b1bd087a5b0703bfca7c9567b80bba18bfbf1d2aaec332f"
Jan 30 00:22:50 crc kubenswrapper[5104]: I0130 00:22:50.531685 5104 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18c679d9-4746-4a28-928e-3ea0d1dbfa89" path="/var/lib/kubelet/pods/18c679d9-4746-4a28-928e-3ea0d1dbfa89/volumes"
Jan 30 00:22:52 crc kubenswrapper[5104]: E0130 00:22:52.766788 5104 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving" image="registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb"
Jan 30 00:22:52 crc kubenswrapper[5104]: E0130 00:22:52.767321 5104 kuberuntime_manager.go:1358] "Unhandled Error" err="init container &Container{Name:pull,Image:registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb,Command:[/util/cpb /bundle],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:bundle,ReadOnly:false,MountPath:/bundle,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:util,ReadOnly:false,MountPath:/util,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kfpw5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000240000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ektxjv_openshift-marketplace(bfae5940-0f71-4c0a-92bc-3296f59b008c): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving" logger="UnhandledError"
Jan 30 00:22:52 crc kubenswrapper[5104]: E0130 00:22:52.768564 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ektxjv" podUID="bfae5940-0f71-4c0a-92bc-3296f59b008c"
Jan 30 00:22:54 crc kubenswrapper[5104]: I0130 00:22:54.277802 5104 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-8966b78d4-vdrst"]
Jan 30 00:22:54 crc kubenswrapper[5104]: I0130 00:22:54.278376 5104 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="18c679d9-4746-4a28-928e-3ea0d1dbfa89" containerName="registry-server"
Jan 30 00:22:54 crc kubenswrapper[5104]: I0130 00:22:54.278390 5104 state_mem.go:107] "Deleted CPUSet assignment" podUID="18c679d9-4746-4a28-928e-3ea0d1dbfa89" containerName="registry-server"
Jan 30 00:22:54 crc kubenswrapper[5104]: I0130 00:22:54.278403 5104 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="18c679d9-4746-4a28-928e-3ea0d1dbfa89" containerName="extract-utilities"
Jan 30 00:22:54 crc kubenswrapper[5104]: I0130 00:22:54.278409 5104 state_mem.go:107] "Deleted CPUSet assignment" podUID="18c679d9-4746-4a28-928e-3ea0d1dbfa89" containerName="extract-utilities"
Jan 30 00:22:54 crc kubenswrapper[5104]: I0130 00:22:54.278430 5104 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="18c679d9-4746-4a28-928e-3ea0d1dbfa89" containerName="extract-content"
Jan 30 00:22:54 crc kubenswrapper[5104]: I0130 00:22:54.278437 5104 state_mem.go:107] "Deleted CPUSet assignment" podUID="18c679d9-4746-4a28-928e-3ea0d1dbfa89" containerName="extract-content"
Jan 30 00:22:54 crc kubenswrapper[5104]: I0130 00:22:54.278539 5104 memory_manager.go:356] "RemoveStaleState removing state" podUID="18c679d9-4746-4a28-928e-3ea0d1dbfa89" containerName="registry-server"
Jan 30 00:22:54 crc kubenswrapper[5104]: I0130 00:22:54.350025 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-8966b78d4-vdrst"]
Jan 30 00:22:54 crc kubenswrapper[5104]: I0130 00:22:54.350196 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-8966b78d4-vdrst"
Jan 30 00:22:54 crc kubenswrapper[5104]: I0130 00:22:54.353661 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"openshift-service-ca.crt\""
Jan 30 00:22:54 crc kubenswrapper[5104]: I0130 00:22:54.354742 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-cainjector-dockercfg-ft842\""
Jan 30 00:22:54 crc kubenswrapper[5104]: I0130 00:22:54.356079 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"kube-root-ca.crt\""
Jan 30 00:22:54 crc kubenswrapper[5104]: I0130 00:22:54.397247 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtlbf\" (UniqueName: \"kubernetes.io/projected/0e4a8964-ca7d-4307-9369-e80c999b9155-kube-api-access-xtlbf\") pod \"cert-manager-cainjector-8966b78d4-vdrst\" (UID: \"0e4a8964-ca7d-4307-9369-e80c999b9155\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-vdrst"
Jan 30 00:22:54 crc kubenswrapper[5104]: I0130 00:22:54.397284 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0e4a8964-ca7d-4307-9369-e80c999b9155-bound-sa-token\") pod \"cert-manager-cainjector-8966b78d4-vdrst\" (UID: \"0e4a8964-ca7d-4307-9369-e80c999b9155\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-vdrst"
Jan 30 00:22:54 crc kubenswrapper[5104]: I0130 00:22:54.498088 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xtlbf\" (UniqueName: \"kubernetes.io/projected/0e4a8964-ca7d-4307-9369-e80c999b9155-kube-api-access-xtlbf\") pod \"cert-manager-cainjector-8966b78d4-vdrst\" (UID: \"0e4a8964-ca7d-4307-9369-e80c999b9155\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-vdrst"
Jan 30 00:22:54 crc kubenswrapper[5104]: I0130 00:22:54.498204 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0e4a8964-ca7d-4307-9369-e80c999b9155-bound-sa-token\") pod \"cert-manager-cainjector-8966b78d4-vdrst\" (UID: \"0e4a8964-ca7d-4307-9369-e80c999b9155\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-vdrst"
Jan 30 00:22:54 crc kubenswrapper[5104]: I0130 00:22:54.522347 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtlbf\" (UniqueName: \"kubernetes.io/projected/0e4a8964-ca7d-4307-9369-e80c999b9155-kube-api-access-xtlbf\") pod \"cert-manager-cainjector-8966b78d4-vdrst\" (UID: \"0e4a8964-ca7d-4307-9369-e80c999b9155\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-vdrst"
Jan 30 00:22:54 crc kubenswrapper[5104]: I0130 00:22:54.523040 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0e4a8964-ca7d-4307-9369-e80c999b9155-bound-sa-token\") pod \"cert-manager-cainjector-8966b78d4-vdrst\" (UID: \"0e4a8964-ca7d-4307-9369-e80c999b9155\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-vdrst"
Jan 30 00:22:54 crc kubenswrapper[5104]: I0130 00:22:54.681267 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-8966b78d4-vdrst"
Jan 30 00:22:54 crc kubenswrapper[5104]: I0130 00:22:54.897350 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-8966b78d4-vdrst"]
Jan 30 00:22:55 crc kubenswrapper[5104]: I0130 00:22:55.831266 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-8966b78d4-vdrst" event={"ID":"0e4a8964-ca7d-4307-9369-e80c999b9155","Type":"ContainerStarted","Data":"1422a53a232944a16e2f15d96813a244f39fd87d9a77ec929584fffbeeba1349"}
Jan 30 00:22:56 crc kubenswrapper[5104]: I0130 00:22:56.551764 5104 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-597b96b99b-cjc8l"]
Jan 30 00:22:56 crc kubenswrapper[5104]: I0130 00:22:56.557755 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-597b96b99b-cjc8l"
Jan 30 00:22:56 crc kubenswrapper[5104]: I0130 00:22:56.562028 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-webhook-dockercfg-b6xd9\""
Jan 30 00:22:56 crc kubenswrapper[5104]: I0130 00:22:56.567092 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-597b96b99b-cjc8l"]
Jan 30 00:22:56 crc kubenswrapper[5104]: I0130 00:22:56.636115 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbjjz\" (UniqueName: \"kubernetes.io/projected/f1dd233a-6ee2-4b44-af18-f22a902c2cd5-kube-api-access-zbjjz\") pod \"cert-manager-webhook-597b96b99b-cjc8l\" (UID: \"f1dd233a-6ee2-4b44-af18-f22a902c2cd5\") " pod="cert-manager/cert-manager-webhook-597b96b99b-cjc8l"
Jan 30 00:22:56 crc kubenswrapper[5104]: I0130 00:22:56.636555 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f1dd233a-6ee2-4b44-af18-f22a902c2cd5-bound-sa-token\") pod \"cert-manager-webhook-597b96b99b-cjc8l\" (UID: \"f1dd233a-6ee2-4b44-af18-f22a902c2cd5\") " pod="cert-manager/cert-manager-webhook-597b96b99b-cjc8l"
Jan 30 00:22:56 crc kubenswrapper[5104]: I0130 00:22:56.738388 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zbjjz\" (UniqueName: \"kubernetes.io/projected/f1dd233a-6ee2-4b44-af18-f22a902c2cd5-kube-api-access-zbjjz\") pod \"cert-manager-webhook-597b96b99b-cjc8l\" (UID: \"f1dd233a-6ee2-4b44-af18-f22a902c2cd5\") " pod="cert-manager/cert-manager-webhook-597b96b99b-cjc8l"
Jan 30 00:22:56 crc kubenswrapper[5104]: I0130 00:22:56.738513 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f1dd233a-6ee2-4b44-af18-f22a902c2cd5-bound-sa-token\") pod \"cert-manager-webhook-597b96b99b-cjc8l\" (UID: \"f1dd233a-6ee2-4b44-af18-f22a902c2cd5\") " pod="cert-manager/cert-manager-webhook-597b96b99b-cjc8l"
Jan 30 00:22:56 crc kubenswrapper[5104]: I0130 00:22:56.769812 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbjjz\" (UniqueName: \"kubernetes.io/projected/f1dd233a-6ee2-4b44-af18-f22a902c2cd5-kube-api-access-zbjjz\") pod \"cert-manager-webhook-597b96b99b-cjc8l\" (UID: \"f1dd233a-6ee2-4b44-af18-f22a902c2cd5\") " pod="cert-manager/cert-manager-webhook-597b96b99b-cjc8l"
Jan 30 00:22:56 crc kubenswrapper[5104]: I0130 00:22:56.773616 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f1dd233a-6ee2-4b44-af18-f22a902c2cd5-bound-sa-token\") pod \"cert-manager-webhook-597b96b99b-cjc8l\" (UID: \"f1dd233a-6ee2-4b44-af18-f22a902c2cd5\") " pod="cert-manager/cert-manager-webhook-597b96b99b-cjc8l"
Jan 30 00:22:56 crc kubenswrapper[5104]: I0130 00:22:56.884079 5104 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-597b96b99b-cjc8l" Jan 30 00:22:57 crc kubenswrapper[5104]: I0130 00:22:57.079739 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-597b96b99b-cjc8l"] Jan 30 00:22:57 crc kubenswrapper[5104]: W0130 00:22:57.089722 5104 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf1dd233a_6ee2_4b44_af18_f22a902c2cd5.slice/crio-47ab732100c8cdedd78fc41bdc79ddfa704a6355dd9f4b4c7a3a1dec2a616814 WatchSource:0}: Error finding container 47ab732100c8cdedd78fc41bdc79ddfa704a6355dd9f4b4c7a3a1dec2a616814: Status 404 returned error can't find the container with id 47ab732100c8cdedd78fc41bdc79ddfa704a6355dd9f4b4c7a3a1dec2a616814 Jan 30 00:22:57 crc kubenswrapper[5104]: I0130 00:22:57.854909 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-597b96b99b-cjc8l" event={"ID":"f1dd233a-6ee2-4b44-af18-f22a902c2cd5","Type":"ContainerStarted","Data":"47ab732100c8cdedd78fc41bdc79ddfa704a6355dd9f4b4c7a3a1dec2a616814"} Jan 30 00:22:59 crc kubenswrapper[5104]: I0130 00:22:59.869414 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-597b96b99b-cjc8l" event={"ID":"f1dd233a-6ee2-4b44-af18-f22a902c2cd5","Type":"ContainerStarted","Data":"cf37aa8d98fb96289f16d1ea0c83507694f670170d145842249618e97fae6786"} Jan 30 00:22:59 crc kubenswrapper[5104]: I0130 00:22:59.869779 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="cert-manager/cert-manager-webhook-597b96b99b-cjc8l" Jan 30 00:22:59 crc kubenswrapper[5104]: I0130 00:22:59.872226 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-8966b78d4-vdrst" event={"ID":"0e4a8964-ca7d-4307-9369-e80c999b9155","Type":"ContainerStarted","Data":"53a9bc63496f67dbee28fa5c3069579849459975633216a21f843f8317617b1a"} Jan 30 00:22:59 crc 
kubenswrapper[5104]: I0130 00:22:59.891357 5104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-597b96b99b-cjc8l" podStartSLOduration=2.2208966390000002 podStartE2EDuration="3.891342509s" podCreationTimestamp="2026-01-30 00:22:56 +0000 UTC" firstStartedPulling="2026-01-30 00:22:57.092620246 +0000 UTC m=+757.824959465" lastFinishedPulling="2026-01-30 00:22:58.763066106 +0000 UTC m=+759.495405335" observedRunningTime="2026-01-30 00:22:59.886918301 +0000 UTC m=+760.619257530" watchObservedRunningTime="2026-01-30 00:22:59.891342509 +0000 UTC m=+760.623681728" Jan 30 00:22:59 crc kubenswrapper[5104]: I0130 00:22:59.910690 5104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-8966b78d4-vdrst" podStartSLOduration=2.057364477 podStartE2EDuration="5.91066707s" podCreationTimestamp="2026-01-30 00:22:54 +0000 UTC" firstStartedPulling="2026-01-30 00:22:54.913506204 +0000 UTC m=+755.645845433" lastFinishedPulling="2026-01-30 00:22:58.766808807 +0000 UTC m=+759.499148026" observedRunningTime="2026-01-30 00:22:59.909775196 +0000 UTC m=+760.642114455" watchObservedRunningTime="2026-01-30 00:22:59.91066707 +0000 UTC m=+760.643006299" Jan 30 00:23:05 crc kubenswrapper[5104]: I0130 00:23:05.892579 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-597b96b99b-cjc8l" Jan 30 00:23:06 crc kubenswrapper[5104]: E0130 00:23:06.534241 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: 
pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ektxjv" podUID="bfae5940-0f71-4c0a-92bc-3296f59b008c" Jan 30 00:23:10 crc kubenswrapper[5104]: I0130 00:23:10.748226 5104 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-759f64656b-p7wlh"] Jan 30 00:23:10 crc kubenswrapper[5104]: I0130 00:23:10.755184 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-759f64656b-p7wlh" Jan 30 00:23:10 crc kubenswrapper[5104]: I0130 00:23:10.757832 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-dockercfg-m2nwg\"" Jan 30 00:23:10 crc kubenswrapper[5104]: I0130 00:23:10.762038 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-759f64656b-p7wlh"] Jan 30 00:23:10 crc kubenswrapper[5104]: I0130 00:23:10.841263 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9656e442-0122-4ac2-8615-36525fbb8519-bound-sa-token\") pod \"cert-manager-759f64656b-p7wlh\" (UID: \"9656e442-0122-4ac2-8615-36525fbb8519\") " pod="cert-manager/cert-manager-759f64656b-p7wlh" Jan 30 00:23:10 crc kubenswrapper[5104]: I0130 00:23:10.841592 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5mxp\" (UniqueName: \"kubernetes.io/projected/9656e442-0122-4ac2-8615-36525fbb8519-kube-api-access-s5mxp\") pod 
\"cert-manager-759f64656b-p7wlh\" (UID: \"9656e442-0122-4ac2-8615-36525fbb8519\") " pod="cert-manager/cert-manager-759f64656b-p7wlh" Jan 30 00:23:10 crc kubenswrapper[5104]: I0130 00:23:10.942757 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9656e442-0122-4ac2-8615-36525fbb8519-bound-sa-token\") pod \"cert-manager-759f64656b-p7wlh\" (UID: \"9656e442-0122-4ac2-8615-36525fbb8519\") " pod="cert-manager/cert-manager-759f64656b-p7wlh" Jan 30 00:23:10 crc kubenswrapper[5104]: I0130 00:23:10.942806 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-s5mxp\" (UniqueName: \"kubernetes.io/projected/9656e442-0122-4ac2-8615-36525fbb8519-kube-api-access-s5mxp\") pod \"cert-manager-759f64656b-p7wlh\" (UID: \"9656e442-0122-4ac2-8615-36525fbb8519\") " pod="cert-manager/cert-manager-759f64656b-p7wlh" Jan 30 00:23:10 crc kubenswrapper[5104]: I0130 00:23:10.964814 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9656e442-0122-4ac2-8615-36525fbb8519-bound-sa-token\") pod \"cert-manager-759f64656b-p7wlh\" (UID: \"9656e442-0122-4ac2-8615-36525fbb8519\") " pod="cert-manager/cert-manager-759f64656b-p7wlh" Jan 30 00:23:10 crc kubenswrapper[5104]: I0130 00:23:10.965111 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5mxp\" (UniqueName: \"kubernetes.io/projected/9656e442-0122-4ac2-8615-36525fbb8519-kube-api-access-s5mxp\") pod \"cert-manager-759f64656b-p7wlh\" (UID: \"9656e442-0122-4ac2-8615-36525fbb8519\") " pod="cert-manager/cert-manager-759f64656b-p7wlh" Jan 30 00:23:11 crc kubenswrapper[5104]: I0130 00:23:11.124975 5104 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-759f64656b-p7wlh" Jan 30 00:23:11 crc kubenswrapper[5104]: I0130 00:23:11.393592 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-759f64656b-p7wlh"] Jan 30 00:23:11 crc kubenswrapper[5104]: I0130 00:23:11.975481 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-759f64656b-p7wlh" event={"ID":"9656e442-0122-4ac2-8615-36525fbb8519","Type":"ContainerStarted","Data":"c45a7f725d42b8cb50e170ce1c86f984416751f59d9129a96241e7187ebfc2f7"} Jan 30 00:23:11 crc kubenswrapper[5104]: I0130 00:23:11.977567 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-759f64656b-p7wlh" event={"ID":"9656e442-0122-4ac2-8615-36525fbb8519","Type":"ContainerStarted","Data":"2933208a15fb8d9b7ae3d2b51e2dd4244e95539b30f49062ae3ffd33dd1fae0e"} Jan 30 00:23:20 crc kubenswrapper[5104]: E0130 00:23:20.542608 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ektxjv" 
podUID="bfae5940-0f71-4c0a-92bc-3296f59b008c" Jan 30 00:23:20 crc kubenswrapper[5104]: I0130 00:23:20.573728 5104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-759f64656b-p7wlh" podStartSLOduration=10.573708861 podStartE2EDuration="10.573708861s" podCreationTimestamp="2026-01-30 00:23:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:23:12.000711807 +0000 UTC m=+772.733051086" watchObservedRunningTime="2026-01-30 00:23:20.573708861 +0000 UTC m=+781.306048090" Jan 30 00:23:33 crc kubenswrapper[5104]: E0130 00:23:33.771126 5104 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving" image="registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb" Jan 30 00:23:33 crc kubenswrapper[5104]: E0130 00:23:33.772080 5104 kuberuntime_manager.go:1358] "Unhandled Error" err="init container &Container{Name:pull,Image:registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb,Command:[/util/cpb /bundle],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: 
{{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:bundle,ReadOnly:false,MountPath:/bundle,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:util,ReadOnly:false,MountPath:/util,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kfpw5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000240000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ektxjv_openshift-marketplace(bfae5940-0f71-4c0a-92bc-3296f59b008c): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving" logger="UnhandledError" Jan 30 00:23:33 crc 
kubenswrapper[5104]: E0130 00:23:33.773388 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ektxjv" podUID="bfae5940-0f71-4c0a-92bc-3296f59b008c" Jan 30 00:23:48 crc kubenswrapper[5104]: E0130 00:23:48.529582 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" 
pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ektxjv" podUID="bfae5940-0f71-4c0a-92bc-3296f59b008c" Jan 30 00:23:59 crc kubenswrapper[5104]: E0130 00:23:59.531216 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ektxjv" podUID="bfae5940-0f71-4c0a-92bc-3296f59b008c" Jan 30 00:24:00 crc kubenswrapper[5104]: I0130 00:24:00.145479 5104 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29495544-cmvpc"] Jan 30 00:24:00 crc kubenswrapper[5104]: I0130 00:24:00.161422 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29495544-cmvpc"] Jan 30 00:24:00 crc kubenswrapper[5104]: I0130 00:24:00.161601 5104 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29495544-cmvpc" Jan 30 00:24:00 crc kubenswrapper[5104]: I0130 00:24:00.165928 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-xh9r9\"" Jan 30 00:24:00 crc kubenswrapper[5104]: I0130 00:24:00.166754 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 30 00:24:00 crc kubenswrapper[5104]: I0130 00:24:00.167408 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 30 00:24:00 crc kubenswrapper[5104]: I0130 00:24:00.273009 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psprh\" (UniqueName: \"kubernetes.io/projected/f623fd34-1c00-4bdd-8dfe-7750937fad34-kube-api-access-psprh\") pod \"auto-csr-approver-29495544-cmvpc\" (UID: \"f623fd34-1c00-4bdd-8dfe-7750937fad34\") " pod="openshift-infra/auto-csr-approver-29495544-cmvpc" Jan 30 00:24:00 crc kubenswrapper[5104]: I0130 00:24:00.375162 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-psprh\" (UniqueName: \"kubernetes.io/projected/f623fd34-1c00-4bdd-8dfe-7750937fad34-kube-api-access-psprh\") pod \"auto-csr-approver-29495544-cmvpc\" (UID: \"f623fd34-1c00-4bdd-8dfe-7750937fad34\") " pod="openshift-infra/auto-csr-approver-29495544-cmvpc" Jan 30 00:24:00 crc kubenswrapper[5104]: I0130 00:24:00.408799 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-psprh\" (UniqueName: \"kubernetes.io/projected/f623fd34-1c00-4bdd-8dfe-7750937fad34-kube-api-access-psprh\") pod \"auto-csr-approver-29495544-cmvpc\" (UID: \"f623fd34-1c00-4bdd-8dfe-7750937fad34\") " pod="openshift-infra/auto-csr-approver-29495544-cmvpc" Jan 30 00:24:00 crc kubenswrapper[5104]: I0130 00:24:00.495751 5104 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29495544-cmvpc" Jan 30 00:24:00 crc kubenswrapper[5104]: I0130 00:24:00.933083 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29495544-cmvpc"] Jan 30 00:24:01 crc kubenswrapper[5104]: I0130 00:24:01.355232 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495544-cmvpc" event={"ID":"f623fd34-1c00-4bdd-8dfe-7750937fad34","Type":"ContainerStarted","Data":"576efde69cc0a869e1e282fa9152241979671c164bfb9ad4bca070fdfbe7dbc0"} Jan 30 00:24:02 crc kubenswrapper[5104]: I0130 00:24:02.362290 5104 generic.go:358] "Generic (PLEG): container finished" podID="f623fd34-1c00-4bdd-8dfe-7750937fad34" containerID="00d35aced4bfdc93574e022fb435b909bcfe2d35d2b1a8b805e0ddeb01d1935f" exitCode=0 Jan 30 00:24:02 crc kubenswrapper[5104]: I0130 00:24:02.362536 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495544-cmvpc" event={"ID":"f623fd34-1c00-4bdd-8dfe-7750937fad34","Type":"ContainerDied","Data":"00d35aced4bfdc93574e022fb435b909bcfe2d35d2b1a8b805e0ddeb01d1935f"} Jan 30 00:24:03 crc kubenswrapper[5104]: I0130 00:24:03.676146 5104 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29495544-cmvpc" Jan 30 00:24:03 crc kubenswrapper[5104]: I0130 00:24:03.822727 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-psprh\" (UniqueName: \"kubernetes.io/projected/f623fd34-1c00-4bdd-8dfe-7750937fad34-kube-api-access-psprh\") pod \"f623fd34-1c00-4bdd-8dfe-7750937fad34\" (UID: \"f623fd34-1c00-4bdd-8dfe-7750937fad34\") " Jan 30 00:24:03 crc kubenswrapper[5104]: I0130 00:24:03.831048 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f623fd34-1c00-4bdd-8dfe-7750937fad34-kube-api-access-psprh" (OuterVolumeSpecName: "kube-api-access-psprh") pod "f623fd34-1c00-4bdd-8dfe-7750937fad34" (UID: "f623fd34-1c00-4bdd-8dfe-7750937fad34"). InnerVolumeSpecName "kube-api-access-psprh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:24:03 crc kubenswrapper[5104]: I0130 00:24:03.924552 5104 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-psprh\" (UniqueName: \"kubernetes.io/projected/f623fd34-1c00-4bdd-8dfe-7750937fad34-kube-api-access-psprh\") on node \"crc\" DevicePath \"\"" Jan 30 00:24:04 crc kubenswrapper[5104]: I0130 00:24:04.385016 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495544-cmvpc" event={"ID":"f623fd34-1c00-4bdd-8dfe-7750937fad34","Type":"ContainerDied","Data":"576efde69cc0a869e1e282fa9152241979671c164bfb9ad4bca070fdfbe7dbc0"} Jan 30 00:24:04 crc kubenswrapper[5104]: I0130 00:24:04.385388 5104 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="576efde69cc0a869e1e282fa9152241979671c164bfb9ad4bca070fdfbe7dbc0" Jan 30 00:24:04 crc kubenswrapper[5104]: I0130 00:24:04.385127 5104 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29495544-cmvpc" Jan 30 00:24:04 crc kubenswrapper[5104]: I0130 00:24:04.757596 5104 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29495538-t48qk"] Jan 30 00:24:04 crc kubenswrapper[5104]: I0130 00:24:04.766155 5104 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29495538-t48qk"] Jan 30 00:24:06 crc kubenswrapper[5104]: I0130 00:24:06.532778 5104 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9bc1e14b-08a9-46fd-a3b0-a1754fa1d35b" path="/var/lib/kubelet/pods/9bc1e14b-08a9-46fd-a3b0-a1754fa1d35b/volumes" Jan 30 00:24:10 crc kubenswrapper[5104]: E0130 00:24:10.532981 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ektxjv" podUID="bfae5940-0f71-4c0a-92bc-3296f59b008c" Jan 30 00:24:21 crc kubenswrapper[5104]: I0130 00:24:21.281708 5104 scope.go:117] "RemoveContainer" containerID="27bac406680865d1d5c6ed7d5ce468c8a83db1088e19a1cac083290838eb5eba" Jan 30 00:24:23 crc 
kubenswrapper[5104]: E0130 00:24:23.528245 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ektxjv" podUID="bfae5940-0f71-4c0a-92bc-3296f59b008c" Jan 30 00:24:38 crc kubenswrapper[5104]: E0130 00:24:38.545224 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": 
dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ektxjv" podUID="bfae5940-0f71-4c0a-92bc-3296f59b008c" Jan 30 00:24:44 crc kubenswrapper[5104]: I0130 00:24:44.949811 5104 patch_prober.go:28] interesting pod/machine-config-daemon-jzfxc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 00:24:44 crc kubenswrapper[5104]: I0130 00:24:44.950611 5104 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jzfxc" podUID="2f49b5db-a679-4eef-9bf2-8d0275caac12" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 00:24:49 crc kubenswrapper[5104]: E0130 00:24:49.529053 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" 
pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ektxjv" podUID="bfae5940-0f71-4c0a-92bc-3296f59b008c" Jan 30 00:25:01 crc kubenswrapper[5104]: E0130 00:25:01.767615 5104 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving" image="registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb" Jan 30 00:25:01 crc kubenswrapper[5104]: E0130 00:25:01.768479 5104 kuberuntime_manager.go:1358] "Unhandled Error" err="init container &Container{Name:pull,Image:registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb,Command:[/util/cpb /bundle],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:bundle,ReadOnly:false,MountPath:/bundle,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:util,ReadOnly:false,MountPath:/util,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kfpw5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000240000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ektxjv_openshift-marketplace(bfae5940-0f71-4c0a-92bc-3296f59b008c): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving" logger="UnhandledError" Jan 30 00:25:01 crc kubenswrapper[5104]: E0130 
00:25:01.769790 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ektxjv" podUID="bfae5940-0f71-4c0a-92bc-3296f59b008c" Jan 30 00:25:14 crc kubenswrapper[5104]: I0130 00:25:14.949952 5104 patch_prober.go:28] interesting pod/machine-config-daemon-jzfxc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 00:25:14 crc kubenswrapper[5104]: I0130 00:25:14.950757 5104 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jzfxc" podUID="2f49b5db-a679-4eef-9bf2-8d0275caac12" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 00:25:15 crc kubenswrapper[5104]: E0130 00:25:15.530206 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": 
ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ektxjv" podUID="bfae5940-0f71-4c0a-92bc-3296f59b008c" Jan 30 00:25:20 crc kubenswrapper[5104]: I0130 00:25:20.857743 5104 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bk79c_3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f/kube-multus/0.log" Jan 30 00:25:20 crc kubenswrapper[5104]: I0130 00:25:20.860015 5104 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bk79c_3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f/kube-multus/0.log" Jan 30 00:25:20 crc kubenswrapper[5104]: I0130 00:25:20.870354 5104 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 30 00:25:20 crc kubenswrapper[5104]: I0130 00:25:20.871203 5104 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 30 00:25:30 crc kubenswrapper[5104]: E0130 00:25:30.541286 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ektxjv" podUID="bfae5940-0f71-4c0a-92bc-3296f59b008c" Jan 30 00:25:31 crc kubenswrapper[5104]: I0130 00:25:31.360575 5104 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-must-gather-wtf55/must-gather-95lj9"] Jan 30 00:25:31 crc kubenswrapper[5104]: I0130 00:25:31.361391 5104 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f623fd34-1c00-4bdd-8dfe-7750937fad34" containerName="oc" Jan 30 00:25:31 crc kubenswrapper[5104]: I0130 00:25:31.361416 5104 state_mem.go:107] "Deleted CPUSet assignment" podUID="f623fd34-1c00-4bdd-8dfe-7750937fad34" containerName="oc" Jan 30 00:25:31 crc kubenswrapper[5104]: I0130 00:25:31.361571 5104 memory_manager.go:356] "RemoveStaleState removing state" podUID="f623fd34-1c00-4bdd-8dfe-7750937fad34" containerName="oc" Jan 30 00:25:31 crc kubenswrapper[5104]: I0130 00:25:31.368017 5104 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-wtf55/must-gather-95lj9" Jan 30 00:25:31 crc kubenswrapper[5104]: I0130 00:25:31.368761 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-wtf55/must-gather-95lj9"] Jan 30 00:25:31 crc kubenswrapper[5104]: I0130 00:25:31.371219 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-must-gather-wtf55\"/\"default-dockercfg-4spr6\"" Jan 30 00:25:31 crc kubenswrapper[5104]: I0130 00:25:31.371537 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-wtf55\"/\"openshift-service-ca.crt\"" Jan 30 00:25:31 crc kubenswrapper[5104]: I0130 00:25:31.371716 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-wtf55\"/\"kube-root-ca.crt\"" Jan 30 00:25:31 crc kubenswrapper[5104]: I0130 00:25:31.486159 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6e964415-d51f-4a72-b159-79664cfded67-must-gather-output\") pod \"must-gather-95lj9\" (UID: \"6e964415-d51f-4a72-b159-79664cfded67\") " pod="openshift-must-gather-wtf55/must-gather-95lj9" Jan 30 00:25:31 crc kubenswrapper[5104]: I0130 00:25:31.486234 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xskl\" (UniqueName: \"kubernetes.io/projected/6e964415-d51f-4a72-b159-79664cfded67-kube-api-access-7xskl\") pod \"must-gather-95lj9\" (UID: \"6e964415-d51f-4a72-b159-79664cfded67\") " pod="openshift-must-gather-wtf55/must-gather-95lj9" Jan 30 00:25:31 crc kubenswrapper[5104]: I0130 00:25:31.587621 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6e964415-d51f-4a72-b159-79664cfded67-must-gather-output\") pod \"must-gather-95lj9\" (UID: 
\"6e964415-d51f-4a72-b159-79664cfded67\") " pod="openshift-must-gather-wtf55/must-gather-95lj9" Jan 30 00:25:31 crc kubenswrapper[5104]: I0130 00:25:31.587722 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7xskl\" (UniqueName: \"kubernetes.io/projected/6e964415-d51f-4a72-b159-79664cfded67-kube-api-access-7xskl\") pod \"must-gather-95lj9\" (UID: \"6e964415-d51f-4a72-b159-79664cfded67\") " pod="openshift-must-gather-wtf55/must-gather-95lj9" Jan 30 00:25:31 crc kubenswrapper[5104]: I0130 00:25:31.588119 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6e964415-d51f-4a72-b159-79664cfded67-must-gather-output\") pod \"must-gather-95lj9\" (UID: \"6e964415-d51f-4a72-b159-79664cfded67\") " pod="openshift-must-gather-wtf55/must-gather-95lj9" Jan 30 00:25:31 crc kubenswrapper[5104]: I0130 00:25:31.617800 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xskl\" (UniqueName: \"kubernetes.io/projected/6e964415-d51f-4a72-b159-79664cfded67-kube-api-access-7xskl\") pod \"must-gather-95lj9\" (UID: \"6e964415-d51f-4a72-b159-79664cfded67\") " pod="openshift-must-gather-wtf55/must-gather-95lj9" Jan 30 00:25:31 crc kubenswrapper[5104]: I0130 00:25:31.696349 5104 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-wtf55/must-gather-95lj9" Jan 30 00:25:31 crc kubenswrapper[5104]: I0130 00:25:31.944361 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-wtf55/must-gather-95lj9"] Jan 30 00:25:32 crc kubenswrapper[5104]: I0130 00:25:32.033011 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wtf55/must-gather-95lj9" event={"ID":"6e964415-d51f-4a72-b159-79664cfded67","Type":"ContainerStarted","Data":"fa4890a2de8cfaf798d5bbfaa1b2850eaae8fc1154b4be5091159d26470d5fbd"} Jan 30 00:25:38 crc kubenswrapper[5104]: I0130 00:25:38.085645 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wtf55/must-gather-95lj9" event={"ID":"6e964415-d51f-4a72-b159-79664cfded67","Type":"ContainerStarted","Data":"1d4746c0ff11068498f04acadc42db73e16264a39dd84f1e451b6d2e3c26915b"} Jan 30 00:25:38 crc kubenswrapper[5104]: I0130 00:25:38.086251 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wtf55/must-gather-95lj9" event={"ID":"6e964415-d51f-4a72-b159-79664cfded67","Type":"ContainerStarted","Data":"7eb2bb49c6a5a6eac0f55d04840e6bc890407d6073970b58c0faaaba8787d092"} Jan 30 00:25:38 crc kubenswrapper[5104]: I0130 00:25:38.104954 5104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-wtf55/must-gather-95lj9" podStartSLOduration=2.062808396 podStartE2EDuration="7.104940082s" podCreationTimestamp="2026-01-30 00:25:31 +0000 UTC" firstStartedPulling="2026-01-30 00:25:31.969952723 +0000 UTC m=+912.702291942" lastFinishedPulling="2026-01-30 00:25:37.012084409 +0000 UTC m=+917.744423628" observedRunningTime="2026-01-30 00:25:38.103153974 +0000 UTC m=+918.835493193" watchObservedRunningTime="2026-01-30 00:25:38.104940082 +0000 UTC m=+918.837279301" Jan 30 00:25:41 crc kubenswrapper[5104]: I0130 00:25:41.208115 5104 kubelet.go:2537] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/community-operators-vd6tt"] Jan 30 00:25:41 crc kubenswrapper[5104]: I0130 00:25:41.405310 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vd6tt"] Jan 30 00:25:41 crc kubenswrapper[5104]: I0130 00:25:41.405474 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vd6tt" Jan 30 00:25:41 crc kubenswrapper[5104]: I0130 00:25:41.496994 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5bbb\" (UniqueName: \"kubernetes.io/projected/7d2d5815-380e-482a-bd5c-b09e0b9267d1-kube-api-access-b5bbb\") pod \"community-operators-vd6tt\" (UID: \"7d2d5815-380e-482a-bd5c-b09e0b9267d1\") " pod="openshift-marketplace/community-operators-vd6tt" Jan 30 00:25:41 crc kubenswrapper[5104]: I0130 00:25:41.497101 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d2d5815-380e-482a-bd5c-b09e0b9267d1-catalog-content\") pod \"community-operators-vd6tt\" (UID: \"7d2d5815-380e-482a-bd5c-b09e0b9267d1\") " pod="openshift-marketplace/community-operators-vd6tt" Jan 30 00:25:41 crc kubenswrapper[5104]: I0130 00:25:41.497156 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d2d5815-380e-482a-bd5c-b09e0b9267d1-utilities\") pod \"community-operators-vd6tt\" (UID: \"7d2d5815-380e-482a-bd5c-b09e0b9267d1\") " pod="openshift-marketplace/community-operators-vd6tt" Jan 30 00:25:41 crc kubenswrapper[5104]: I0130 00:25:41.598927 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d2d5815-380e-482a-bd5c-b09e0b9267d1-catalog-content\") pod \"community-operators-vd6tt\" (UID: 
\"7d2d5815-380e-482a-bd5c-b09e0b9267d1\") " pod="openshift-marketplace/community-operators-vd6tt" Jan 30 00:25:41 crc kubenswrapper[5104]: I0130 00:25:41.599396 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d2d5815-380e-482a-bd5c-b09e0b9267d1-catalog-content\") pod \"community-operators-vd6tt\" (UID: \"7d2d5815-380e-482a-bd5c-b09e0b9267d1\") " pod="openshift-marketplace/community-operators-vd6tt" Jan 30 00:25:41 crc kubenswrapper[5104]: I0130 00:25:41.599597 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d2d5815-380e-482a-bd5c-b09e0b9267d1-utilities\") pod \"community-operators-vd6tt\" (UID: \"7d2d5815-380e-482a-bd5c-b09e0b9267d1\") " pod="openshift-marketplace/community-operators-vd6tt" Jan 30 00:25:41 crc kubenswrapper[5104]: I0130 00:25:41.599652 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-b5bbb\" (UniqueName: \"kubernetes.io/projected/7d2d5815-380e-482a-bd5c-b09e0b9267d1-kube-api-access-b5bbb\") pod \"community-operators-vd6tt\" (UID: \"7d2d5815-380e-482a-bd5c-b09e0b9267d1\") " pod="openshift-marketplace/community-operators-vd6tt" Jan 30 00:25:41 crc kubenswrapper[5104]: I0130 00:25:41.599921 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d2d5815-380e-482a-bd5c-b09e0b9267d1-utilities\") pod \"community-operators-vd6tt\" (UID: \"7d2d5815-380e-482a-bd5c-b09e0b9267d1\") " pod="openshift-marketplace/community-operators-vd6tt" Jan 30 00:25:41 crc kubenswrapper[5104]: I0130 00:25:41.628742 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5bbb\" (UniqueName: \"kubernetes.io/projected/7d2d5815-380e-482a-bd5c-b09e0b9267d1-kube-api-access-b5bbb\") pod \"community-operators-vd6tt\" (UID: 
\"7d2d5815-380e-482a-bd5c-b09e0b9267d1\") " pod="openshift-marketplace/community-operators-vd6tt" Jan 30 00:25:41 crc kubenswrapper[5104]: I0130 00:25:41.721649 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vd6tt" Jan 30 00:25:41 crc kubenswrapper[5104]: I0130 00:25:41.989257 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vd6tt"] Jan 30 00:25:42 crc kubenswrapper[5104]: I0130 00:25:42.110861 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vd6tt" event={"ID":"7d2d5815-380e-482a-bd5c-b09e0b9267d1","Type":"ContainerStarted","Data":"0cb59ba7a585694c275d9176bb04aa9022c03dffc4774de78464f88fe80f5051"} Jan 30 00:25:42 crc kubenswrapper[5104]: E0130 00:25:42.536165 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ektxjv" podUID="bfae5940-0f71-4c0a-92bc-3296f59b008c" Jan 30 00:25:43 crc kubenswrapper[5104]: I0130 00:25:43.120577 5104 generic.go:358] 
"Generic (PLEG): container finished" podID="7d2d5815-380e-482a-bd5c-b09e0b9267d1" containerID="d2ba9e639f77339a81283366f62464ee6434eabc0279774887e2e5003f3e6b94" exitCode=0 Jan 30 00:25:43 crc kubenswrapper[5104]: I0130 00:25:43.120645 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vd6tt" event={"ID":"7d2d5815-380e-482a-bd5c-b09e0b9267d1","Type":"ContainerDied","Data":"d2ba9e639f77339a81283366f62464ee6434eabc0279774887e2e5003f3e6b94"} Jan 30 00:25:44 crc kubenswrapper[5104]: I0130 00:25:44.949290 5104 patch_prober.go:28] interesting pod/machine-config-daemon-jzfxc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 00:25:44 crc kubenswrapper[5104]: I0130 00:25:44.949805 5104 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jzfxc" podUID="2f49b5db-a679-4eef-9bf2-8d0275caac12" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 00:25:44 crc kubenswrapper[5104]: I0130 00:25:44.949861 5104 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jzfxc" Jan 30 00:25:44 crc kubenswrapper[5104]: I0130 00:25:44.950368 5104 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c126fc7c5d040b04802a3f6d1d50a32c0a699bdd4fab7d404eb1bbdcb4462998"} pod="openshift-machine-config-operator/machine-config-daemon-jzfxc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 00:25:44 crc kubenswrapper[5104]: I0130 00:25:44.950417 5104 kuberuntime_container.go:858] "Killing container 
with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jzfxc" podUID="2f49b5db-a679-4eef-9bf2-8d0275caac12" containerName="machine-config-daemon" containerID="cri-o://c126fc7c5d040b04802a3f6d1d50a32c0a699bdd4fab7d404eb1bbdcb4462998" gracePeriod=600 Jan 30 00:25:45 crc kubenswrapper[5104]: I0130 00:25:45.138048 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vd6tt" event={"ID":"7d2d5815-380e-482a-bd5c-b09e0b9267d1","Type":"ContainerStarted","Data":"b03b8e449952c9beacdecd460b32ded16ae74c34d9eff23dde374a2dd32c731c"} Jan 30 00:25:46 crc kubenswrapper[5104]: I0130 00:25:46.155886 5104 generic.go:358] "Generic (PLEG): container finished" podID="2f49b5db-a679-4eef-9bf2-8d0275caac12" containerID="c126fc7c5d040b04802a3f6d1d50a32c0a699bdd4fab7d404eb1bbdcb4462998" exitCode=0 Jan 30 00:25:46 crc kubenswrapper[5104]: I0130 00:25:46.156075 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzfxc" event={"ID":"2f49b5db-a679-4eef-9bf2-8d0275caac12","Type":"ContainerDied","Data":"c126fc7c5d040b04802a3f6d1d50a32c0a699bdd4fab7d404eb1bbdcb4462998"} Jan 30 00:25:46 crc kubenswrapper[5104]: I0130 00:25:46.156375 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzfxc" event={"ID":"2f49b5db-a679-4eef-9bf2-8d0275caac12","Type":"ContainerStarted","Data":"1b7d1b8c348b48cd05f685aca263a03710f520dae93d7f497ea7c88e0035f94f"} Jan 30 00:25:46 crc kubenswrapper[5104]: I0130 00:25:46.156407 5104 scope.go:117] "RemoveContainer" containerID="d754d2bbf2cca802aaf2079a592a35c77544128b415319cab69816ec60b29ff6" Jan 30 00:25:46 crc kubenswrapper[5104]: I0130 00:25:46.161393 5104 generic.go:358] "Generic (PLEG): container finished" podID="7d2d5815-380e-482a-bd5c-b09e0b9267d1" containerID="b03b8e449952c9beacdecd460b32ded16ae74c34d9eff23dde374a2dd32c731c" exitCode=0 Jan 30 00:25:46 crc 
kubenswrapper[5104]: I0130 00:25:46.161469 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vd6tt" event={"ID":"7d2d5815-380e-482a-bd5c-b09e0b9267d1","Type":"ContainerDied","Data":"b03b8e449952c9beacdecd460b32ded16ae74c34d9eff23dde374a2dd32c731c"} Jan 30 00:25:47 crc kubenswrapper[5104]: I0130 00:25:47.172503 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vd6tt" event={"ID":"7d2d5815-380e-482a-bd5c-b09e0b9267d1","Type":"ContainerStarted","Data":"6ed6f0dac3ecf45b5916d93aa33a96e11a0c19dbb0567a75f078b80847165bdf"} Jan 30 00:25:47 crc kubenswrapper[5104]: I0130 00:25:47.193507 5104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vd6tt" podStartSLOduration=4.657237072 podStartE2EDuration="6.193486812s" podCreationTimestamp="2026-01-30 00:25:41 +0000 UTC" firstStartedPulling="2026-01-30 00:25:43.121961093 +0000 UTC m=+923.854300342" lastFinishedPulling="2026-01-30 00:25:44.658210843 +0000 UTC m=+925.390550082" observedRunningTime="2026-01-30 00:25:47.186831373 +0000 UTC m=+927.919170612" watchObservedRunningTime="2026-01-30 00:25:47.193486812 +0000 UTC m=+927.925826051" Jan 30 00:25:51 crc kubenswrapper[5104]: I0130 00:25:51.722198 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-vd6tt" Jan 30 00:25:51 crc kubenswrapper[5104]: I0130 00:25:51.722755 5104 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vd6tt" Jan 30 00:25:51 crc kubenswrapper[5104]: I0130 00:25:51.756481 5104 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vd6tt" Jan 30 00:25:52 crc kubenswrapper[5104]: I0130 00:25:52.252371 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/community-operators-vd6tt" Jan 30 00:25:52 crc kubenswrapper[5104]: I0130 00:25:52.290335 5104 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vd6tt"] Jan 30 00:25:54 crc kubenswrapper[5104]: I0130 00:25:54.217772 5104 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vd6tt" podUID="7d2d5815-380e-482a-bd5c-b09e0b9267d1" containerName="registry-server" containerID="cri-o://6ed6f0dac3ecf45b5916d93aa33a96e11a0c19dbb0567a75f078b80847165bdf" gracePeriod=2 Jan 30 00:25:54 crc kubenswrapper[5104]: I0130 00:25:54.621892 5104 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vd6tt" Jan 30 00:25:54 crc kubenswrapper[5104]: I0130 00:25:54.635346 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b5bbb\" (UniqueName: \"kubernetes.io/projected/7d2d5815-380e-482a-bd5c-b09e0b9267d1-kube-api-access-b5bbb\") pod \"7d2d5815-380e-482a-bd5c-b09e0b9267d1\" (UID: \"7d2d5815-380e-482a-bd5c-b09e0b9267d1\") " Jan 30 00:25:54 crc kubenswrapper[5104]: I0130 00:25:54.635425 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d2d5815-380e-482a-bd5c-b09e0b9267d1-utilities\") pod \"7d2d5815-380e-482a-bd5c-b09e0b9267d1\" (UID: \"7d2d5815-380e-482a-bd5c-b09e0b9267d1\") " Jan 30 00:25:54 crc kubenswrapper[5104]: I0130 00:25:54.635572 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d2d5815-380e-482a-bd5c-b09e0b9267d1-catalog-content\") pod \"7d2d5815-380e-482a-bd5c-b09e0b9267d1\" (UID: \"7d2d5815-380e-482a-bd5c-b09e0b9267d1\") " Jan 30 00:25:54 crc kubenswrapper[5104]: I0130 00:25:54.636820 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/empty-dir/7d2d5815-380e-482a-bd5c-b09e0b9267d1-utilities" (OuterVolumeSpecName: "utilities") pod "7d2d5815-380e-482a-bd5c-b09e0b9267d1" (UID: "7d2d5815-380e-482a-bd5c-b09e0b9267d1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:25:54 crc kubenswrapper[5104]: I0130 00:25:54.645104 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d2d5815-380e-482a-bd5c-b09e0b9267d1-kube-api-access-b5bbb" (OuterVolumeSpecName: "kube-api-access-b5bbb") pod "7d2d5815-380e-482a-bd5c-b09e0b9267d1" (UID: "7d2d5815-380e-482a-bd5c-b09e0b9267d1"). InnerVolumeSpecName "kube-api-access-b5bbb". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:25:54 crc kubenswrapper[5104]: I0130 00:25:54.650494 5104 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-b5bbb\" (UniqueName: \"kubernetes.io/projected/7d2d5815-380e-482a-bd5c-b09e0b9267d1-kube-api-access-b5bbb\") on node \"crc\" DevicePath \"\"" Jan 30 00:25:54 crc kubenswrapper[5104]: I0130 00:25:54.650523 5104 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d2d5815-380e-482a-bd5c-b09e0b9267d1-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 00:25:54 crc kubenswrapper[5104]: I0130 00:25:54.697774 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d2d5815-380e-482a-bd5c-b09e0b9267d1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7d2d5815-380e-482a-bd5c-b09e0b9267d1" (UID: "7d2d5815-380e-482a-bd5c-b09e0b9267d1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 30 00:25:54 crc kubenswrapper[5104]: I0130 00:25:54.751424 5104 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d2d5815-380e-482a-bd5c-b09e0b9267d1-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 00:25:55 crc kubenswrapper[5104]: I0130 00:25:55.248470 5104 generic.go:358] "Generic (PLEG): container finished" podID="7d2d5815-380e-482a-bd5c-b09e0b9267d1" containerID="6ed6f0dac3ecf45b5916d93aa33a96e11a0c19dbb0567a75f078b80847165bdf" exitCode=0
Jan 30 00:25:55 crc kubenswrapper[5104]: I0130 00:25:55.248533 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vd6tt" event={"ID":"7d2d5815-380e-482a-bd5c-b09e0b9267d1","Type":"ContainerDied","Data":"6ed6f0dac3ecf45b5916d93aa33a96e11a0c19dbb0567a75f078b80847165bdf"}
Jan 30 00:25:55 crc kubenswrapper[5104]: I0130 00:25:55.248794 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vd6tt" event={"ID":"7d2d5815-380e-482a-bd5c-b09e0b9267d1","Type":"ContainerDied","Data":"0cb59ba7a585694c275d9176bb04aa9022c03dffc4774de78464f88fe80f5051"}
Jan 30 00:25:55 crc kubenswrapper[5104]: I0130 00:25:55.248815 5104 scope.go:117] "RemoveContainer" containerID="6ed6f0dac3ecf45b5916d93aa33a96e11a0c19dbb0567a75f078b80847165bdf"
Jan 30 00:25:55 crc kubenswrapper[5104]: I0130 00:25:55.248581 5104 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vd6tt"
Jan 30 00:25:55 crc kubenswrapper[5104]: I0130 00:25:55.264217 5104 scope.go:117] "RemoveContainer" containerID="b03b8e449952c9beacdecd460b32ded16ae74c34d9eff23dde374a2dd32c731c"
Jan 30 00:25:55 crc kubenswrapper[5104]: I0130 00:25:55.283644 5104 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vd6tt"]
Jan 30 00:25:55 crc kubenswrapper[5104]: I0130 00:25:55.291625 5104 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vd6tt"]
Jan 30 00:25:55 crc kubenswrapper[5104]: I0130 00:25:55.292462 5104 scope.go:117] "RemoveContainer" containerID="d2ba9e639f77339a81283366f62464ee6434eabc0279774887e2e5003f3e6b94"
Jan 30 00:25:55 crc kubenswrapper[5104]: I0130 00:25:55.310456 5104 scope.go:117] "RemoveContainer" containerID="6ed6f0dac3ecf45b5916d93aa33a96e11a0c19dbb0567a75f078b80847165bdf"
Jan 30 00:25:55 crc kubenswrapper[5104]: E0130 00:25:55.311233 5104 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ed6f0dac3ecf45b5916d93aa33a96e11a0c19dbb0567a75f078b80847165bdf\": container with ID starting with 6ed6f0dac3ecf45b5916d93aa33a96e11a0c19dbb0567a75f078b80847165bdf not found: ID does not exist" containerID="6ed6f0dac3ecf45b5916d93aa33a96e11a0c19dbb0567a75f078b80847165bdf"
Jan 30 00:25:55 crc kubenswrapper[5104]: I0130 00:25:55.311285 5104 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ed6f0dac3ecf45b5916d93aa33a96e11a0c19dbb0567a75f078b80847165bdf"} err="failed to get container status \"6ed6f0dac3ecf45b5916d93aa33a96e11a0c19dbb0567a75f078b80847165bdf\": rpc error: code = NotFound desc = could not find container \"6ed6f0dac3ecf45b5916d93aa33a96e11a0c19dbb0567a75f078b80847165bdf\": container with ID starting with 6ed6f0dac3ecf45b5916d93aa33a96e11a0c19dbb0567a75f078b80847165bdf not found: ID does not exist"
Jan 30 00:25:55 crc kubenswrapper[5104]: I0130 00:25:55.311315 5104 scope.go:117] "RemoveContainer" containerID="b03b8e449952c9beacdecd460b32ded16ae74c34d9eff23dde374a2dd32c731c"
Jan 30 00:25:55 crc kubenswrapper[5104]: E0130 00:25:55.311658 5104 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b03b8e449952c9beacdecd460b32ded16ae74c34d9eff23dde374a2dd32c731c\": container with ID starting with b03b8e449952c9beacdecd460b32ded16ae74c34d9eff23dde374a2dd32c731c not found: ID does not exist" containerID="b03b8e449952c9beacdecd460b32ded16ae74c34d9eff23dde374a2dd32c731c"
Jan 30 00:25:55 crc kubenswrapper[5104]: I0130 00:25:55.311771 5104 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b03b8e449952c9beacdecd460b32ded16ae74c34d9eff23dde374a2dd32c731c"} err="failed to get container status \"b03b8e449952c9beacdecd460b32ded16ae74c34d9eff23dde374a2dd32c731c\": rpc error: code = NotFound desc = could not find container \"b03b8e449952c9beacdecd460b32ded16ae74c34d9eff23dde374a2dd32c731c\": container with ID starting with b03b8e449952c9beacdecd460b32ded16ae74c34d9eff23dde374a2dd32c731c not found: ID does not exist"
Jan 30 00:25:55 crc kubenswrapper[5104]: I0130 00:25:55.311883 5104 scope.go:117] "RemoveContainer" containerID="d2ba9e639f77339a81283366f62464ee6434eabc0279774887e2e5003f3e6b94"
Jan 30 00:25:55 crc kubenswrapper[5104]: E0130 00:25:55.312261 5104 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2ba9e639f77339a81283366f62464ee6434eabc0279774887e2e5003f3e6b94\": container with ID starting with d2ba9e639f77339a81283366f62464ee6434eabc0279774887e2e5003f3e6b94 not found: ID does not exist" containerID="d2ba9e639f77339a81283366f62464ee6434eabc0279774887e2e5003f3e6b94"
Jan 30 00:25:55 crc kubenswrapper[5104]: I0130 00:25:55.312362 5104 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2ba9e639f77339a81283366f62464ee6434eabc0279774887e2e5003f3e6b94"} err="failed to get container status \"d2ba9e639f77339a81283366f62464ee6434eabc0279774887e2e5003f3e6b94\": rpc error: code = NotFound desc = could not find container \"d2ba9e639f77339a81283366f62464ee6434eabc0279774887e2e5003f3e6b94\": container with ID starting with d2ba9e639f77339a81283366f62464ee6434eabc0279774887e2e5003f3e6b94 not found: ID does not exist"
Jan 30 00:25:56 crc kubenswrapper[5104]: I0130 00:25:56.538161 5104 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d2d5815-380e-482a-bd5c-b09e0b9267d1" path="/var/lib/kubelet/pods/7d2d5815-380e-482a-bd5c-b09e0b9267d1/volumes"
Jan 30 00:25:57 crc kubenswrapper[5104]: E0130 00:25:57.527644 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ektxjv" podUID="bfae5940-0f71-4c0a-92bc-3296f59b008c"
Jan 30 00:26:00 crc kubenswrapper[5104]: I0130 00:26:00.144404 5104 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29495546-zcdmr"]
Jan 30 00:26:00 crc kubenswrapper[5104]: I0130 00:26:00.146255 5104 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7d2d5815-380e-482a-bd5c-b09e0b9267d1" containerName="extract-utilities"
Jan 30 00:26:00 crc kubenswrapper[5104]: I0130 00:26:00.146299 5104 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d2d5815-380e-482a-bd5c-b09e0b9267d1" containerName="extract-utilities"
Jan 30 00:26:00 crc kubenswrapper[5104]: I0130 00:26:00.146332 5104 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7d2d5815-380e-482a-bd5c-b09e0b9267d1" containerName="extract-content"
Jan 30 00:26:00 crc kubenswrapper[5104]: I0130 00:26:00.146349 5104 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d2d5815-380e-482a-bd5c-b09e0b9267d1" containerName="extract-content"
Jan 30 00:26:00 crc kubenswrapper[5104]: I0130 00:26:00.146445 5104 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7d2d5815-380e-482a-bd5c-b09e0b9267d1" containerName="registry-server"
Jan 30 00:26:00 crc kubenswrapper[5104]: I0130 00:26:00.146464 5104 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d2d5815-380e-482a-bd5c-b09e0b9267d1" containerName="registry-server"
Jan 30 00:26:00 crc kubenswrapper[5104]: I0130 00:26:00.146721 5104 memory_manager.go:356] "RemoveStaleState removing state" podUID="7d2d5815-380e-482a-bd5c-b09e0b9267d1" containerName="registry-server"
Jan 30 00:26:00 crc kubenswrapper[5104]: I0130 00:26:00.168136 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29495546-zcdmr"]
Jan 30 00:26:00 crc kubenswrapper[5104]: I0130 00:26:00.168363 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29495546-zcdmr"
Jan 30 00:26:00 crc kubenswrapper[5104]: I0130 00:26:00.173731 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\""
Jan 30 00:26:00 crc kubenswrapper[5104]: I0130 00:26:00.174194 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-xh9r9\""
Jan 30 00:26:00 crc kubenswrapper[5104]: I0130 00:26:00.174523 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\""
Jan 30 00:26:00 crc kubenswrapper[5104]: I0130 00:26:00.226906 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9q5v5\" (UniqueName: \"kubernetes.io/projected/9bdffee1-c1a4-4859-8c01-0e5559602fc9-kube-api-access-9q5v5\") pod \"auto-csr-approver-29495546-zcdmr\" (UID: \"9bdffee1-c1a4-4859-8c01-0e5559602fc9\") " pod="openshift-infra/auto-csr-approver-29495546-zcdmr"
Jan 30 00:26:00 crc kubenswrapper[5104]: I0130 00:26:00.328569 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9q5v5\" (UniqueName: \"kubernetes.io/projected/9bdffee1-c1a4-4859-8c01-0e5559602fc9-kube-api-access-9q5v5\") pod \"auto-csr-approver-29495546-zcdmr\" (UID: \"9bdffee1-c1a4-4859-8c01-0e5559602fc9\") " pod="openshift-infra/auto-csr-approver-29495546-zcdmr"
Jan 30 00:26:00 crc kubenswrapper[5104]: I0130 00:26:00.348503 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9q5v5\" (UniqueName: \"kubernetes.io/projected/9bdffee1-c1a4-4859-8c01-0e5559602fc9-kube-api-access-9q5v5\") pod \"auto-csr-approver-29495546-zcdmr\" (UID: \"9bdffee1-c1a4-4859-8c01-0e5559602fc9\") " pod="openshift-infra/auto-csr-approver-29495546-zcdmr"
Jan 30 00:26:00 crc kubenswrapper[5104]: I0130 00:26:00.506072 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29495546-zcdmr"
Jan 30 00:26:00 crc kubenswrapper[5104]: I0130 00:26:00.981495 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29495546-zcdmr"]
Jan 30 00:26:00 crc kubenswrapper[5104]: I0130 00:26:00.990870 5104 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 30 00:26:01 crc kubenswrapper[5104]: I0130 00:26:01.282216 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495546-zcdmr" event={"ID":"9bdffee1-c1a4-4859-8c01-0e5559602fc9","Type":"ContainerStarted","Data":"68c5bd1f56fddc85e6ecd50c62c62f7c32aaad9dbc40c783261de26e91da01c5"}
Jan 30 00:26:02 crc kubenswrapper[5104]: I0130 00:26:02.291442 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495546-zcdmr" event={"ID":"9bdffee1-c1a4-4859-8c01-0e5559602fc9","Type":"ContainerStarted","Data":"331f7ec32e1f42a5317444a2dcca8af87e7ce19481e50c3dc6057bbe4f598cc6"}
Jan 30 00:26:02 crc kubenswrapper[5104]: I0130 00:26:02.307211 5104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29495546-zcdmr" podStartSLOduration=1.338835172 podStartE2EDuration="2.307192909s" podCreationTimestamp="2026-01-30 00:26:00 +0000 UTC" firstStartedPulling="2026-01-30 00:26:00.991091566 +0000 UTC m=+941.723430785" lastFinishedPulling="2026-01-30 00:26:01.959449273 +0000 UTC m=+942.691788522" observedRunningTime="2026-01-30 00:26:02.30499315 +0000 UTC m=+943.037332409" watchObservedRunningTime="2026-01-30 00:26:02.307192909 +0000 UTC m=+943.039532128"
Jan 30 00:26:03 crc kubenswrapper[5104]: I0130 00:26:03.301557 5104 generic.go:358] "Generic (PLEG): container finished" podID="9bdffee1-c1a4-4859-8c01-0e5559602fc9" containerID="331f7ec32e1f42a5317444a2dcca8af87e7ce19481e50c3dc6057bbe4f598cc6" exitCode=0
Jan 30 00:26:03 crc kubenswrapper[5104]: I0130 00:26:03.301650 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495546-zcdmr" event={"ID":"9bdffee1-c1a4-4859-8c01-0e5559602fc9","Type":"ContainerDied","Data":"331f7ec32e1f42a5317444a2dcca8af87e7ce19481e50c3dc6057bbe4f598cc6"}
Jan 30 00:26:04 crc kubenswrapper[5104]: I0130 00:26:04.558535 5104 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29495546-zcdmr"
Jan 30 00:26:04 crc kubenswrapper[5104]: I0130 00:26:04.680515 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9q5v5\" (UniqueName: \"kubernetes.io/projected/9bdffee1-c1a4-4859-8c01-0e5559602fc9-kube-api-access-9q5v5\") pod \"9bdffee1-c1a4-4859-8c01-0e5559602fc9\" (UID: \"9bdffee1-c1a4-4859-8c01-0e5559602fc9\") "
Jan 30 00:26:04 crc kubenswrapper[5104]: I0130 00:26:04.688024 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bdffee1-c1a4-4859-8c01-0e5559602fc9-kube-api-access-9q5v5" (OuterVolumeSpecName: "kube-api-access-9q5v5") pod "9bdffee1-c1a4-4859-8c01-0e5559602fc9" (UID: "9bdffee1-c1a4-4859-8c01-0e5559602fc9"). InnerVolumeSpecName "kube-api-access-9q5v5". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 30 00:26:04 crc kubenswrapper[5104]: I0130 00:26:04.782570 5104 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9q5v5\" (UniqueName: \"kubernetes.io/projected/9bdffee1-c1a4-4859-8c01-0e5559602fc9-kube-api-access-9q5v5\") on node \"crc\" DevicePath \"\""
Jan 30 00:26:05 crc kubenswrapper[5104]: I0130 00:26:05.317638 5104 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29495546-zcdmr"
Jan 30 00:26:05 crc kubenswrapper[5104]: I0130 00:26:05.317649 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495546-zcdmr" event={"ID":"9bdffee1-c1a4-4859-8c01-0e5559602fc9","Type":"ContainerDied","Data":"68c5bd1f56fddc85e6ecd50c62c62f7c32aaad9dbc40c783261de26e91da01c5"}
Jan 30 00:26:05 crc kubenswrapper[5104]: I0130 00:26:05.317953 5104 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68c5bd1f56fddc85e6ecd50c62c62f7c32aaad9dbc40c783261de26e91da01c5"
Jan 30 00:26:05 crc kubenswrapper[5104]: I0130 00:26:05.382376 5104 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29495540-ns2dd"]
Jan 30 00:26:05 crc kubenswrapper[5104]: I0130 00:26:05.390448 5104 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29495540-ns2dd"]
Jan 30 00:26:06 crc kubenswrapper[5104]: I0130 00:26:06.536102 5104 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1de56b55-9735-4835-8a38-2984afa2ebb9" path="/var/lib/kubelet/pods/1de56b55-9735-4835-8a38-2984afa2ebb9/volumes"
Jan 30 00:26:10 crc kubenswrapper[5104]: E0130 00:26:10.533241 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ektxjv" podUID="bfae5940-0f71-4c0a-92bc-3296f59b008c"
Jan 30 00:26:21 crc kubenswrapper[5104]: I0130 00:26:21.209100 5104 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-75ffdb6fcd-9lr7t_3f7789da-fc14-4144-8d2e-44a08ce5dd85/control-plane-machine-set-operator/0.log"
Jan 30 00:26:21 crc kubenswrapper[5104]: I0130 00:26:21.358546 5104 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-755bb95488-mh68h_ee7fb654-66c4-4c3d-88ff-e2f8f8b78f88/machine-api-operator/0.log"
Jan 30 00:26:21 crc kubenswrapper[5104]: I0130 00:26:21.363215 5104 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-755bb95488-mh68h_ee7fb654-66c4-4c3d-88ff-e2f8f8b78f88/kube-rbac-proxy/0.log"
Jan 30 00:26:21 crc kubenswrapper[5104]: I0130 00:26:21.400926 5104 scope.go:117] "RemoveContainer" containerID="1f9893e60cad40dd85400ca575c73eccd2a9cbf08977b2ca04b1a8a9bf1ac997"
Jan 30 00:26:21 crc kubenswrapper[5104]: E0130 00:26:21.529071 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ektxjv" podUID="bfae5940-0f71-4c0a-92bc-3296f59b008c"
Jan 30 00:26:33 crc kubenswrapper[5104]: I0130 00:26:33.557445 5104 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-759f64656b-p7wlh_9656e442-0122-4ac2-8615-36525fbb8519/cert-manager-controller/0.log"
Jan 30 00:26:33 crc kubenswrapper[5104]: I0130 00:26:33.635697 5104 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-8966b78d4-vdrst_0e4a8964-ca7d-4307-9369-e80c999b9155/cert-manager-cainjector/0.log"
Jan 30 00:26:33 crc kubenswrapper[5104]: I0130 00:26:33.715154 5104 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-597b96b99b-cjc8l_f1dd233a-6ee2-4b44-af18-f22a902c2cd5/cert-manager-webhook/0.log"
Jan 30 00:26:36 crc kubenswrapper[5104]: E0130 00:26:36.528832 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ektxjv" podUID="bfae5940-0f71-4c0a-92bc-3296f59b008c"
Jan 30 00:26:47 crc kubenswrapper[5104]: I0130 00:26:47.175369 5104 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-9bc85b4bf-lbsz6_e4a69f62-1737-47f0-9ad8-19f3eca7ea5a/prometheus-operator/0.log"
Jan 30 00:26:47 crc kubenswrapper[5104]: I0130 00:26:47.287639 5104 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-56fbb5df75-dsksm_fc4ee410-e207-40f9-b067-488460ca04ef/prometheus-operator-admission-webhook/0.log"
Jan 30 00:26:47 crc kubenswrapper[5104]: I0130 00:26:47.354840 5104 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-56fbb5df75-n78zn_61a1034e-23f3-433b-9f89-3887202ac67b/prometheus-operator-admission-webhook/0.log"
Jan 30 00:26:47 crc kubenswrapper[5104]: I0130 00:26:47.449660 5104 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-85c68dddb-492rh_0f6e653f-1b86-4e85-82ea-bd5e8962100a/operator/0.log"
Jan 30 00:26:47 crc kubenswrapper[5104]: I0130 00:26:47.523740 5104 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-669c9f96b5-kt5v4_f8751e81-7dbc-4b35-bf44-371140e56858/perses-operator/0.log"
Jan 30 00:26:51 crc kubenswrapper[5104]: E0130 00:26:51.528735 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ektxjv" podUID="bfae5940-0f71-4c0a-92bc-3296f59b008c"
Jan 30 00:27:01 crc kubenswrapper[5104]: I0130 00:27:01.420438 5104 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fmktgx_3025cc01-0b4c-401d-bdec-5fe14e497982/util/0.log"
Jan 30 00:27:01 crc kubenswrapper[5104]: I0130 00:27:01.575102 5104 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fmktgx_3025cc01-0b4c-401d-bdec-5fe14e497982/util/0.log"
Jan 30 00:27:01 crc kubenswrapper[5104]: I0130 00:27:01.584653 5104 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fmktgx_3025cc01-0b4c-401d-bdec-5fe14e497982/pull/0.log"
Jan 30 00:27:01 crc kubenswrapper[5104]: I0130 00:27:01.587743 5104 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fmktgx_3025cc01-0b4c-401d-bdec-5fe14e497982/pull/0.log"
Jan 30 00:27:01 crc kubenswrapper[5104]: I0130 00:27:01.753585 5104 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fmktgx_3025cc01-0b4c-401d-bdec-5fe14e497982/util/0.log"
Jan 30 00:27:01 crc kubenswrapper[5104]: I0130 00:27:01.773083 5104 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fmktgx_3025cc01-0b4c-401d-bdec-5fe14e497982/pull/0.log"
Jan 30 00:27:01 crc kubenswrapper[5104]: I0130 00:27:01.776479 5104 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fmktgx_3025cc01-0b4c-401d-bdec-5fe14e497982/extract/0.log"
Jan 30 00:27:01 crc kubenswrapper[5104]: I0130 00:27:01.928019 5104 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ektxjv_bfae5940-0f71-4c0a-92bc-3296f59b008c/util/0.log"
Jan 30 00:27:02 crc kubenswrapper[5104]: I0130 00:27:02.083344 5104 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ektxjv_bfae5940-0f71-4c0a-92bc-3296f59b008c/util/0.log"
Jan 30 00:27:02 crc kubenswrapper[5104]: I0130 00:27:02.275000 5104 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ektxjv_bfae5940-0f71-4c0a-92bc-3296f59b008c/util/0.log"
Jan 30 00:27:02 crc kubenswrapper[5104]: I0130 00:27:02.412756 5104 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5gtgqz_e4c40fe6-90cc-4975-8d16-769c0291a313/util/0.log"
Jan 30 00:27:02 crc kubenswrapper[5104]: I0130 00:27:02.594160 5104 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5gtgqz_e4c40fe6-90cc-4975-8d16-769c0291a313/util/0.log"
Jan 30 00:27:02 crc kubenswrapper[5104]: I0130 00:27:02.602549 5104 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5gtgqz_e4c40fe6-90cc-4975-8d16-769c0291a313/pull/0.log"
Jan 30 00:27:02 crc kubenswrapper[5104]: I0130 00:27:02.622000 5104 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5gtgqz_e4c40fe6-90cc-4975-8d16-769c0291a313/pull/0.log"
Jan 30 00:27:02 crc kubenswrapper[5104]: I0130 00:27:02.754778 5104 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5gtgqz_e4c40fe6-90cc-4975-8d16-769c0291a313/util/0.log"
Jan 30 00:27:02 crc kubenswrapper[5104]: I0130 00:27:02.767505 5104 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5gtgqz_e4c40fe6-90cc-4975-8d16-769c0291a313/pull/0.log"
Jan 30 00:27:02 crc kubenswrapper[5104]: I0130 00:27:02.775915 5104 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5gtgqz_e4c40fe6-90cc-4975-8d16-769c0291a313/extract/0.log"
Jan 30 00:27:02 crc kubenswrapper[5104]: I0130 00:27:02.903169 5104 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08njw49_8ae706ea-d078-41e6-86b2-7dc023d77808/util/0.log"
Jan 30 00:27:03 crc kubenswrapper[5104]: I0130 00:27:03.040161 5104 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08njw49_8ae706ea-d078-41e6-86b2-7dc023d77808/util/0.log"
Jan 30 00:27:03 crc kubenswrapper[5104]: I0130 00:27:03.053869 5104 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08njw49_8ae706ea-d078-41e6-86b2-7dc023d77808/pull/0.log"
Jan 30 00:27:03 crc kubenswrapper[5104]: I0130 00:27:03.075562 5104 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08njw49_8ae706ea-d078-41e6-86b2-7dc023d77808/pull/0.log"
Jan 30 00:27:03 crc kubenswrapper[5104]: I0130 00:27:03.220744 5104 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08njw49_8ae706ea-d078-41e6-86b2-7dc023d77808/pull/0.log"
Jan 30 00:27:03 crc kubenswrapper[5104]: I0130 00:27:03.226966 5104 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08njw49_8ae706ea-d078-41e6-86b2-7dc023d77808/util/0.log"
Jan 30 00:27:03 crc kubenswrapper[5104]: I0130 00:27:03.255758 5104 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08njw49_8ae706ea-d078-41e6-86b2-7dc023d77808/extract/0.log"
Jan 30 00:27:03 crc kubenswrapper[5104]: I0130 00:27:03.395786 5104 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rpkrd_a6285221-9433-44df-8c25-e804e3faddd1/extract-utilities/0.log"
Jan 30 00:27:03 crc kubenswrapper[5104]: I0130 00:27:03.580559 5104 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rpkrd_a6285221-9433-44df-8c25-e804e3faddd1/extract-content/0.log"
Jan 30 00:27:03 crc kubenswrapper[5104]: I0130 00:27:03.584679 5104 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rpkrd_a6285221-9433-44df-8c25-e804e3faddd1/extract-utilities/0.log"
Jan 30 00:27:03 crc kubenswrapper[5104]: I0130 00:27:03.590082 5104 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rpkrd_a6285221-9433-44df-8c25-e804e3faddd1/extract-content/0.log"
Jan 30 00:27:03 crc kubenswrapper[5104]: I0130 00:27:03.726376 5104 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rpkrd_a6285221-9433-44df-8c25-e804e3faddd1/extract-content/0.log"
Jan 30 00:27:03 crc kubenswrapper[5104]: I0130 00:27:03.729634 5104 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rpkrd_a6285221-9433-44df-8c25-e804e3faddd1/extract-utilities/0.log"
Jan 30 00:27:03 crc kubenswrapper[5104]: I0130 00:27:03.870225 5104 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rpkrd_a6285221-9433-44df-8c25-e804e3faddd1/registry-server/0.log"
Jan 30 00:27:03 crc kubenswrapper[5104]: I0130 00:27:03.896312 5104 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-zdtm9_8c42f94f-6ae8-49c5-ba21-54fd74e3329f/extract-utilities/0.log"
Jan 30 00:27:04 crc kubenswrapper[5104]: I0130 00:27:04.045604 5104 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-zdtm9_8c42f94f-6ae8-49c5-ba21-54fd74e3329f/extract-utilities/0.log"
Jan 30 00:27:04 crc kubenswrapper[5104]: I0130 00:27:04.057492 5104 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-zdtm9_8c42f94f-6ae8-49c5-ba21-54fd74e3329f/extract-content/0.log"
Jan 30 00:27:04 crc kubenswrapper[5104]: I0130 00:27:04.062743 5104 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-zdtm9_8c42f94f-6ae8-49c5-ba21-54fd74e3329f/extract-content/0.log"
Jan 30 00:27:04 crc kubenswrapper[5104]: I0130 00:27:04.237106 5104 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-zdtm9_8c42f94f-6ae8-49c5-ba21-54fd74e3329f/extract-content/0.log"
Jan 30 00:27:04 crc kubenswrapper[5104]: I0130 00:27:04.237206 5104 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-zdtm9_8c42f94f-6ae8-49c5-ba21-54fd74e3329f/extract-utilities/0.log"
Jan 30 00:27:04 crc kubenswrapper[5104]: I0130 00:27:04.308779 5104 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-547dbd544d-n5spl_f14474a2-e628-439c-8bbb-981e1a035991/marketplace-operator/0.log"
Jan 30 00:27:04 crc kubenswrapper[5104]: I0130 00:27:04.438370 5104 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-zdtm9_8c42f94f-6ae8-49c5-ba21-54fd74e3329f/registry-server/0.log"
Jan 30 00:27:04 crc kubenswrapper[5104]: I0130 00:27:04.470071 5104 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7h878_734416dc-0d5e-4b50-a117-bc6e9c8f92b9/extract-utilities/0.log"
Jan 30 00:27:04 crc kubenswrapper[5104]: E0130 00:27:04.530276 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ektxjv" podUID="bfae5940-0f71-4c0a-92bc-3296f59b008c"
Jan 30 00:27:04 crc kubenswrapper[5104]: I0130 00:27:04.602441 5104 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7h878_734416dc-0d5e-4b50-a117-bc6e9c8f92b9/extract-utilities/0.log"
Jan 30 00:27:04 crc kubenswrapper[5104]: I0130 00:27:04.602695 5104 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7h878_734416dc-0d5e-4b50-a117-bc6e9c8f92b9/extract-content/0.log"
Jan 30 00:27:04 crc kubenswrapper[5104]: I0130 00:27:04.605024 5104 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7h878_734416dc-0d5e-4b50-a117-bc6e9c8f92b9/extract-content/0.log"
Jan 30 00:27:04 crc kubenswrapper[5104]: I0130 00:27:04.765935 5104 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7h878_734416dc-0d5e-4b50-a117-bc6e9c8f92b9/extract-utilities/0.log"
Jan 30 00:27:04 crc kubenswrapper[5104]: I0130 00:27:04.788355 5104 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7h878_734416dc-0d5e-4b50-a117-bc6e9c8f92b9/extract-content/0.log"
Jan 30 00:27:04 crc kubenswrapper[5104]: I0130 00:27:04.899071 5104 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7h878_734416dc-0d5e-4b50-a117-bc6e9c8f92b9/registry-server/0.log"
Jan 30 00:27:16 crc kubenswrapper[5104]: E0130 00:27:16.527925 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ektxjv" podUID="bfae5940-0f71-4c0a-92bc-3296f59b008c"
Jan 30 00:27:17 crc kubenswrapper[5104]: I0130 00:27:17.269569 5104 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-56fbb5df75-dsksm_fc4ee410-e207-40f9-b067-488460ca04ef/prometheus-operator-admission-webhook/0.log"
Jan 30 00:27:17 crc kubenswrapper[5104]: I0130 00:27:17.294488 5104 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-9bc85b4bf-lbsz6_e4a69f62-1737-47f0-9ad8-19f3eca7ea5a/prometheus-operator/0.log"
Jan 30 00:27:17 crc kubenswrapper[5104]: I0130 00:27:17.341477 5104 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-56fbb5df75-n78zn_61a1034e-23f3-433b-9f89-3887202ac67b/prometheus-operator-admission-webhook/0.log"
Jan 30 00:27:17 crc kubenswrapper[5104]: I0130 00:27:17.379907 5104 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-85c68dddb-492rh_0f6e653f-1b86-4e85-82ea-bd5e8962100a/operator/0.log"
Jan 30 00:27:17 crc kubenswrapper[5104]: I0130 00:27:17.436479 5104 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-669c9f96b5-kt5v4_f8751e81-7dbc-4b35-bf44-371140e56858/perses-operator/0.log"
Jan 30 00:27:31 crc kubenswrapper[5104]: E0130 00:27:31.527460 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ektxjv" podUID="bfae5940-0f71-4c0a-92bc-3296f59b008c"
Jan 30 00:27:45 crc kubenswrapper[5104]: E0130 00:27:45.767908 5104 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on
199.204.47.54:53: server misbehaving" image="registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb" Jan 30 00:27:45 crc kubenswrapper[5104]: E0130 00:27:45.768724 5104 kuberuntime_manager.go:1358] "Unhandled Error" err="init container &Container{Name:pull,Image:registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb,Command:[/util/cpb /bundle],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:bundle,ReadOnly:false,MountPath:/bundle,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:util,ReadOnly:false,MountPath:/util,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kfpw5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000240000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ektxjv_openshift-marketplace(bfae5940-0f71-4c0a-92bc-3296f59b008c): ErrImagePull: unable to pull image or OCI artifact: pull image 
err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving" logger="UnhandledError" Jan 30 00:27:45 crc kubenswrapper[5104]: E0130 00:27:45.770087 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ektxjv" podUID="bfae5940-0f71-4c0a-92bc-3296f59b008c" Jan 30 00:27:56 crc kubenswrapper[5104]: E0130 00:27:56.529579 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source 
docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ektxjv" podUID="bfae5940-0f71-4c0a-92bc-3296f59b008c" Jan 30 00:27:58 crc kubenswrapper[5104]: I0130 00:27:58.361254 5104 generic.go:358] "Generic (PLEG): container finished" podID="6e964415-d51f-4a72-b159-79664cfded67" containerID="7eb2bb49c6a5a6eac0f55d04840e6bc890407d6073970b58c0faaaba8787d092" exitCode=0 Jan 30 00:27:58 crc kubenswrapper[5104]: I0130 00:27:58.361331 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wtf55/must-gather-95lj9" event={"ID":"6e964415-d51f-4a72-b159-79664cfded67","Type":"ContainerDied","Data":"7eb2bb49c6a5a6eac0f55d04840e6bc890407d6073970b58c0faaaba8787d092"} Jan 30 00:27:58 crc kubenswrapper[5104]: I0130 00:27:58.362198 5104 scope.go:117] "RemoveContainer" containerID="7eb2bb49c6a5a6eac0f55d04840e6bc890407d6073970b58c0faaaba8787d092" Jan 30 00:27:58 crc kubenswrapper[5104]: I0130 00:27:58.489517 5104 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-wtf55_must-gather-95lj9_6e964415-d51f-4a72-b159-79664cfded67/gather/0.log" Jan 30 00:28:00 crc kubenswrapper[5104]: I0130 00:28:00.143351 5104 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29495548-lbdr6"] Jan 30 00:28:00 crc kubenswrapper[5104]: I0130 00:28:00.144357 5104 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" 
podUID="9bdffee1-c1a4-4859-8c01-0e5559602fc9" containerName="oc" Jan 30 00:28:00 crc kubenswrapper[5104]: I0130 00:28:00.144379 5104 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bdffee1-c1a4-4859-8c01-0e5559602fc9" containerName="oc" Jan 30 00:28:00 crc kubenswrapper[5104]: I0130 00:28:00.144538 5104 memory_manager.go:356] "RemoveStaleState removing state" podUID="9bdffee1-c1a4-4859-8c01-0e5559602fc9" containerName="oc" Jan 30 00:28:00 crc kubenswrapper[5104]: I0130 00:28:00.149207 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29495548-lbdr6" Jan 30 00:28:00 crc kubenswrapper[5104]: I0130 00:28:00.152631 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 30 00:28:00 crc kubenswrapper[5104]: I0130 00:28:00.153218 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-xh9r9\"" Jan 30 00:28:00 crc kubenswrapper[5104]: I0130 00:28:00.153810 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 30 00:28:00 crc kubenswrapper[5104]: I0130 00:28:00.160773 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29495548-lbdr6"] Jan 30 00:28:00 crc kubenswrapper[5104]: I0130 00:28:00.217099 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hfqt\" (UniqueName: \"kubernetes.io/projected/855e27ba-d39f-4c5d-a5c6-45ae6423ebaf-kube-api-access-8hfqt\") pod \"auto-csr-approver-29495548-lbdr6\" (UID: \"855e27ba-d39f-4c5d-a5c6-45ae6423ebaf\") " pod="openshift-infra/auto-csr-approver-29495548-lbdr6" Jan 30 00:28:00 crc kubenswrapper[5104]: I0130 00:28:00.318597 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8hfqt\" (UniqueName: 
\"kubernetes.io/projected/855e27ba-d39f-4c5d-a5c6-45ae6423ebaf-kube-api-access-8hfqt\") pod \"auto-csr-approver-29495548-lbdr6\" (UID: \"855e27ba-d39f-4c5d-a5c6-45ae6423ebaf\") " pod="openshift-infra/auto-csr-approver-29495548-lbdr6" Jan 30 00:28:00 crc kubenswrapper[5104]: I0130 00:28:00.355366 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hfqt\" (UniqueName: \"kubernetes.io/projected/855e27ba-d39f-4c5d-a5c6-45ae6423ebaf-kube-api-access-8hfqt\") pod \"auto-csr-approver-29495548-lbdr6\" (UID: \"855e27ba-d39f-4c5d-a5c6-45ae6423ebaf\") " pod="openshift-infra/auto-csr-approver-29495548-lbdr6" Jan 30 00:28:00 crc kubenswrapper[5104]: I0130 00:28:00.480049 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29495548-lbdr6" Jan 30 00:28:00 crc kubenswrapper[5104]: I0130 00:28:00.977116 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29495548-lbdr6"] Jan 30 00:28:00 crc kubenswrapper[5104]: W0130 00:28:00.981456 5104 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod855e27ba_d39f_4c5d_a5c6_45ae6423ebaf.slice/crio-5619a922c370389e5f4087745d5aa3231ef306bf3e1ffef85904f6ecac94226c WatchSource:0}: Error finding container 5619a922c370389e5f4087745d5aa3231ef306bf3e1ffef85904f6ecac94226c: Status 404 returned error can't find the container with id 5619a922c370389e5f4087745d5aa3231ef306bf3e1ffef85904f6ecac94226c Jan 30 00:28:01 crc kubenswrapper[5104]: I0130 00:28:01.385767 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495548-lbdr6" event={"ID":"855e27ba-d39f-4c5d-a5c6-45ae6423ebaf","Type":"ContainerStarted","Data":"5619a922c370389e5f4087745d5aa3231ef306bf3e1ffef85904f6ecac94226c"} Jan 30 00:28:02 crc kubenswrapper[5104]: I0130 00:28:02.394395 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-infra/auto-csr-approver-29495548-lbdr6" event={"ID":"855e27ba-d39f-4c5d-a5c6-45ae6423ebaf","Type":"ContainerStarted","Data":"1125e76097756c18c5f0ced09688742e6ad4af194888a79a3ef03efe823a154b"} Jan 30 00:28:02 crc kubenswrapper[5104]: I0130 00:28:02.412193 5104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29495548-lbdr6" podStartSLOduration=1.408864115 podStartE2EDuration="2.412173828s" podCreationTimestamp="2026-01-30 00:28:00 +0000 UTC" firstStartedPulling="2026-01-30 00:28:00.983940268 +0000 UTC m=+1061.716279497" lastFinishedPulling="2026-01-30 00:28:01.987249961 +0000 UTC m=+1062.719589210" observedRunningTime="2026-01-30 00:28:02.410288977 +0000 UTC m=+1063.142628206" watchObservedRunningTime="2026-01-30 00:28:02.412173828 +0000 UTC m=+1063.144513047" Jan 30 00:28:03 crc kubenswrapper[5104]: I0130 00:28:03.403015 5104 generic.go:358] "Generic (PLEG): container finished" podID="855e27ba-d39f-4c5d-a5c6-45ae6423ebaf" containerID="1125e76097756c18c5f0ced09688742e6ad4af194888a79a3ef03efe823a154b" exitCode=0 Jan 30 00:28:03 crc kubenswrapper[5104]: I0130 00:28:03.403309 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495548-lbdr6" event={"ID":"855e27ba-d39f-4c5d-a5c6-45ae6423ebaf","Type":"ContainerDied","Data":"1125e76097756c18c5f0ced09688742e6ad4af194888a79a3ef03efe823a154b"} Jan 30 00:28:04 crc kubenswrapper[5104]: I0130 00:28:04.615726 5104 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-wtf55/must-gather-95lj9"] Jan 30 00:28:04 crc kubenswrapper[5104]: I0130 00:28:04.616497 5104 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-must-gather-wtf55/must-gather-95lj9" podUID="6e964415-d51f-4a72-b159-79664cfded67" containerName="copy" containerID="cri-o://1d4746c0ff11068498f04acadc42db73e16264a39dd84f1e451b6d2e3c26915b" gracePeriod=2 Jan 30 00:28:04 crc kubenswrapper[5104]: I0130 
00:28:04.619751 5104 status_manager.go:895] "Failed to get status for pod" podUID="6e964415-d51f-4a72-b159-79664cfded67" pod="openshift-must-gather-wtf55/must-gather-95lj9" err="pods \"must-gather-95lj9\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-wtf55\": no relationship found between node 'crc' and this object" Jan 30 00:28:04 crc kubenswrapper[5104]: I0130 00:28:04.625590 5104 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-wtf55/must-gather-95lj9"] Jan 30 00:28:04 crc kubenswrapper[5104]: I0130 00:28:04.719756 5104 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29495548-lbdr6" Jan 30 00:28:04 crc kubenswrapper[5104]: I0130 00:28:04.738295 5104 status_manager.go:895] "Failed to get status for pod" podUID="6e964415-d51f-4a72-b159-79664cfded67" pod="openshift-must-gather-wtf55/must-gather-95lj9" err="pods \"must-gather-95lj9\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-wtf55\": no relationship found between node 'crc' and this object" Jan 30 00:28:04 crc kubenswrapper[5104]: I0130 00:28:04.786676 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hfqt\" (UniqueName: \"kubernetes.io/projected/855e27ba-d39f-4c5d-a5c6-45ae6423ebaf-kube-api-access-8hfqt\") pod \"855e27ba-d39f-4c5d-a5c6-45ae6423ebaf\" (UID: \"855e27ba-d39f-4c5d-a5c6-45ae6423ebaf\") " Jan 30 00:28:04 crc kubenswrapper[5104]: I0130 00:28:04.794943 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/855e27ba-d39f-4c5d-a5c6-45ae6423ebaf-kube-api-access-8hfqt" (OuterVolumeSpecName: "kube-api-access-8hfqt") pod "855e27ba-d39f-4c5d-a5c6-45ae6423ebaf" (UID: "855e27ba-d39f-4c5d-a5c6-45ae6423ebaf"). InnerVolumeSpecName "kube-api-access-8hfqt". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:28:04 crc kubenswrapper[5104]: I0130 00:28:04.888487 5104 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8hfqt\" (UniqueName: \"kubernetes.io/projected/855e27ba-d39f-4c5d-a5c6-45ae6423ebaf-kube-api-access-8hfqt\") on node \"crc\" DevicePath \"\"" Jan 30 00:28:04 crc kubenswrapper[5104]: I0130 00:28:04.990787 5104 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-wtf55_must-gather-95lj9_6e964415-d51f-4a72-b159-79664cfded67/copy/0.log" Jan 30 00:28:04 crc kubenswrapper[5104]: I0130 00:28:04.991440 5104 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wtf55/must-gather-95lj9" Jan 30 00:28:04 crc kubenswrapper[5104]: I0130 00:28:04.993069 5104 status_manager.go:895] "Failed to get status for pod" podUID="6e964415-d51f-4a72-b159-79664cfded67" pod="openshift-must-gather-wtf55/must-gather-95lj9" err="pods \"must-gather-95lj9\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-wtf55\": no relationship found between node 'crc' and this object" Jan 30 00:28:05 crc kubenswrapper[5104]: I0130 00:28:05.100701 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6e964415-d51f-4a72-b159-79664cfded67-must-gather-output\") pod \"6e964415-d51f-4a72-b159-79664cfded67\" (UID: \"6e964415-d51f-4a72-b159-79664cfded67\") " Jan 30 00:28:05 crc kubenswrapper[5104]: I0130 00:28:05.100759 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7xskl\" (UniqueName: \"kubernetes.io/projected/6e964415-d51f-4a72-b159-79664cfded67-kube-api-access-7xskl\") pod \"6e964415-d51f-4a72-b159-79664cfded67\" (UID: \"6e964415-d51f-4a72-b159-79664cfded67\") " Jan 30 00:28:05 crc kubenswrapper[5104]: I0130 00:28:05.105533 
5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e964415-d51f-4a72-b159-79664cfded67-kube-api-access-7xskl" (OuterVolumeSpecName: "kube-api-access-7xskl") pod "6e964415-d51f-4a72-b159-79664cfded67" (UID: "6e964415-d51f-4a72-b159-79664cfded67"). InnerVolumeSpecName "kube-api-access-7xskl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:28:05 crc kubenswrapper[5104]: I0130 00:28:05.161386 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e964415-d51f-4a72-b159-79664cfded67-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "6e964415-d51f-4a72-b159-79664cfded67" (UID: "6e964415-d51f-4a72-b159-79664cfded67"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:28:05 crc kubenswrapper[5104]: I0130 00:28:05.202675 5104 reconciler_common.go:299] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6e964415-d51f-4a72-b159-79664cfded67-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 30 00:28:05 crc kubenswrapper[5104]: I0130 00:28:05.202712 5104 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7xskl\" (UniqueName: \"kubernetes.io/projected/6e964415-d51f-4a72-b159-79664cfded67-kube-api-access-7xskl\") on node \"crc\" DevicePath \"\"" Jan 30 00:28:05 crc kubenswrapper[5104]: I0130 00:28:05.427948 5104 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-wtf55_must-gather-95lj9_6e964415-d51f-4a72-b159-79664cfded67/copy/0.log" Jan 30 00:28:05 crc kubenswrapper[5104]: I0130 00:28:05.428906 5104 scope.go:117] "RemoveContainer" containerID="1d4746c0ff11068498f04acadc42db73e16264a39dd84f1e451b6d2e3c26915b" Jan 30 00:28:05 crc kubenswrapper[5104]: I0130 00:28:05.428937 5104 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-wtf55/must-gather-95lj9" Jan 30 00:28:05 crc kubenswrapper[5104]: I0130 00:28:05.428831 5104 generic.go:358] "Generic (PLEG): container finished" podID="6e964415-d51f-4a72-b159-79664cfded67" containerID="1d4746c0ff11068498f04acadc42db73e16264a39dd84f1e451b6d2e3c26915b" exitCode=143 Jan 30 00:28:05 crc kubenswrapper[5104]: I0130 00:28:05.430734 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495548-lbdr6" event={"ID":"855e27ba-d39f-4c5d-a5c6-45ae6423ebaf","Type":"ContainerDied","Data":"5619a922c370389e5f4087745d5aa3231ef306bf3e1ffef85904f6ecac94226c"} Jan 30 00:28:05 crc kubenswrapper[5104]: I0130 00:28:05.430760 5104 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5619a922c370389e5f4087745d5aa3231ef306bf3e1ffef85904f6ecac94226c" Jan 30 00:28:05 crc kubenswrapper[5104]: I0130 00:28:05.430796 5104 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29495548-lbdr6" Jan 30 00:28:05 crc kubenswrapper[5104]: I0130 00:28:05.436046 5104 status_manager.go:895] "Failed to get status for pod" podUID="6e964415-d51f-4a72-b159-79664cfded67" pod="openshift-must-gather-wtf55/must-gather-95lj9" err="pods \"must-gather-95lj9\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-wtf55\": no relationship found between node 'crc' and this object" Jan 30 00:28:05 crc kubenswrapper[5104]: I0130 00:28:05.447279 5104 scope.go:117] "RemoveContainer" containerID="7eb2bb49c6a5a6eac0f55d04840e6bc890407d6073970b58c0faaaba8787d092" Jan 30 00:28:05 crc kubenswrapper[5104]: I0130 00:28:05.450302 5104 status_manager.go:895] "Failed to get status for pod" podUID="6e964415-d51f-4a72-b159-79664cfded67" pod="openshift-must-gather-wtf55/must-gather-95lj9" err="pods \"must-gather-95lj9\" is forbidden: User \"system:node:crc\" cannot 
get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-wtf55\": no relationship found between node 'crc' and this object" Jan 30 00:28:05 crc kubenswrapper[5104]: I0130 00:28:05.457883 5104 status_manager.go:895] "Failed to get status for pod" podUID="6e964415-d51f-4a72-b159-79664cfded67" pod="openshift-must-gather-wtf55/must-gather-95lj9" err="pods \"must-gather-95lj9\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-wtf55\": no relationship found between node 'crc' and this object" Jan 30 00:28:05 crc kubenswrapper[5104]: I0130 00:28:05.464997 5104 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29495542-k7g2r"] Jan 30 00:28:05 crc kubenswrapper[5104]: I0130 00:28:05.469712 5104 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29495542-k7g2r"] Jan 30 00:28:05 crc kubenswrapper[5104]: I0130 00:28:05.509169 5104 scope.go:117] "RemoveContainer" containerID="1d4746c0ff11068498f04acadc42db73e16264a39dd84f1e451b6d2e3c26915b" Jan 30 00:28:05 crc kubenswrapper[5104]: E0130 00:28:05.509584 5104 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d4746c0ff11068498f04acadc42db73e16264a39dd84f1e451b6d2e3c26915b\": container with ID starting with 1d4746c0ff11068498f04acadc42db73e16264a39dd84f1e451b6d2e3c26915b not found: ID does not exist" containerID="1d4746c0ff11068498f04acadc42db73e16264a39dd84f1e451b6d2e3c26915b" Jan 30 00:28:05 crc kubenswrapper[5104]: I0130 00:28:05.509613 5104 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d4746c0ff11068498f04acadc42db73e16264a39dd84f1e451b6d2e3c26915b"} err="failed to get container status \"1d4746c0ff11068498f04acadc42db73e16264a39dd84f1e451b6d2e3c26915b\": rpc error: code = NotFound desc = could not find container 
\"1d4746c0ff11068498f04acadc42db73e16264a39dd84f1e451b6d2e3c26915b\": container with ID starting with 1d4746c0ff11068498f04acadc42db73e16264a39dd84f1e451b6d2e3c26915b not found: ID does not exist" Jan 30 00:28:05 crc kubenswrapper[5104]: I0130 00:28:05.509631 5104 scope.go:117] "RemoveContainer" containerID="7eb2bb49c6a5a6eac0f55d04840e6bc890407d6073970b58c0faaaba8787d092" Jan 30 00:28:05 crc kubenswrapper[5104]: E0130 00:28:05.509937 5104 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7eb2bb49c6a5a6eac0f55d04840e6bc890407d6073970b58c0faaaba8787d092\": container with ID starting with 7eb2bb49c6a5a6eac0f55d04840e6bc890407d6073970b58c0faaaba8787d092 not found: ID does not exist" containerID="7eb2bb49c6a5a6eac0f55d04840e6bc890407d6073970b58c0faaaba8787d092" Jan 30 00:28:05 crc kubenswrapper[5104]: I0130 00:28:05.509984 5104 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7eb2bb49c6a5a6eac0f55d04840e6bc890407d6073970b58c0faaaba8787d092"} err="failed to get container status \"7eb2bb49c6a5a6eac0f55d04840e6bc890407d6073970b58c0faaaba8787d092\": rpc error: code = NotFound desc = could not find container \"7eb2bb49c6a5a6eac0f55d04840e6bc890407d6073970b58c0faaaba8787d092\": container with ID starting with 7eb2bb49c6a5a6eac0f55d04840e6bc890407d6073970b58c0faaaba8787d092 not found: ID does not exist" Jan 30 00:28:06 crc kubenswrapper[5104]: I0130 00:28:06.550198 5104 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e964415-d51f-4a72-b159-79664cfded67" path="/var/lib/kubelet/pods/6e964415-d51f-4a72-b159-79664cfded67/volumes" Jan 30 00:28:06 crc kubenswrapper[5104]: I0130 00:28:06.551109 5104 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3744f5e-251f-466b-8b04-bee4b3c6d743" path="/var/lib/kubelet/pods/f3744f5e-251f-466b-8b04-bee4b3c6d743/volumes" Jan 30 00:28:08 crc kubenswrapper[5104]: E0130 00:28:08.529257 
5104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ektxjv" podUID="bfae5940-0f71-4c0a-92bc-3296f59b008c" Jan 30 00:28:14 crc kubenswrapper[5104]: I0130 00:28:14.950393 5104 patch_prober.go:28] interesting pod/machine-config-daemon-jzfxc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 00:28:14 crc kubenswrapper[5104]: I0130 00:28:14.953890 5104 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jzfxc" podUID="2f49b5db-a679-4eef-9bf2-8d0275caac12" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 00:28:19 crc kubenswrapper[5104]: E0130 00:28:19.528916 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: 
\"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ektxjv" podUID="bfae5940-0f71-4c0a-92bc-3296f59b008c"
Jan 30 00:28:21 crc kubenswrapper[5104]: I0130 00:28:21.526747 5104 scope.go:117] "RemoveContainer" containerID="91ea0be2dc6b0ff1e2a1c098de61d64132d3a30b9213c786579f63b5c0e824ec"
Jan 30 00:28:32 crc kubenswrapper[5104]: E0130 00:28:32.528777 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ektxjv" podUID="bfae5940-0f71-4c0a-92bc-3296f59b008c"
Jan 30 00:28:44 crc kubenswrapper[5104]: I0130 00:28:44.949112 5104 patch_prober.go:28] interesting pod/machine-config-daemon-jzfxc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 00:28:44 crc kubenswrapper[5104]: I0130 00:28:44.949747 5104 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jzfxc" podUID="2f49b5db-a679-4eef-9bf2-8d0275caac12" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 00:28:47 crc kubenswrapper[5104]: E0130 00:28:47.529191 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ektxjv" podUID="bfae5940-0f71-4c0a-92bc-3296f59b008c"
Jan 30 00:29:02 crc kubenswrapper[5104]: E0130 00:29:02.528444 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ektxjv" podUID="bfae5940-0f71-4c0a-92bc-3296f59b008c"
Jan 30 00:29:14 crc kubenswrapper[5104]: E0130 00:29:14.528958 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ektxjv" podUID="bfae5940-0f71-4c0a-92bc-3296f59b008c"
Jan 30 00:29:14 crc kubenswrapper[5104]: I0130 00:29:14.950059 5104 patch_prober.go:28] interesting pod/machine-config-daemon-jzfxc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 00:29:14 crc kubenswrapper[5104]: I0130 00:29:14.950183 5104 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jzfxc" podUID="2f49b5db-a679-4eef-9bf2-8d0275caac12" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 00:29:14 crc kubenswrapper[5104]: I0130 00:29:14.950280 5104 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jzfxc"
Jan 30 00:29:14 crc kubenswrapper[5104]: I0130 00:29:14.951264 5104 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1b7d1b8c348b48cd05f685aca263a03710f520dae93d7f497ea7c88e0035f94f"} pod="openshift-machine-config-operator/machine-config-daemon-jzfxc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 30 00:29:14 crc kubenswrapper[5104]: I0130 00:29:14.951343 5104 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jzfxc" podUID="2f49b5db-a679-4eef-9bf2-8d0275caac12" containerName="machine-config-daemon" containerID="cri-o://1b7d1b8c348b48cd05f685aca263a03710f520dae93d7f497ea7c88e0035f94f" gracePeriod=600
Jan 30 00:29:15 crc kubenswrapper[5104]: I0130 00:29:15.950907 5104 generic.go:358] "Generic (PLEG): container finished" podID="2f49b5db-a679-4eef-9bf2-8d0275caac12" containerID="1b7d1b8c348b48cd05f685aca263a03710f520dae93d7f497ea7c88e0035f94f" exitCode=0
Jan 30 00:29:15 crc kubenswrapper[5104]: I0130 00:29:15.950946 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzfxc" event={"ID":"2f49b5db-a679-4eef-9bf2-8d0275caac12","Type":"ContainerDied","Data":"1b7d1b8c348b48cd05f685aca263a03710f520dae93d7f497ea7c88e0035f94f"}
Jan 30 00:29:15 crc kubenswrapper[5104]: I0130 00:29:15.951368 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzfxc" event={"ID":"2f49b5db-a679-4eef-9bf2-8d0275caac12","Type":"ContainerStarted","Data":"4fd1da82a1fa4aef4be64a14c9c73fd47f50a66ace63591f9e9e49f1d6f5726a"}
Jan 30 00:29:15 crc kubenswrapper[5104]: I0130 00:29:15.951401 5104 scope.go:117] "RemoveContainer" containerID="c126fc7c5d040b04802a3f6d1d50a32c0a699bdd4fab7d404eb1bbdcb4462998"
Jan 30 00:29:29 crc kubenswrapper[5104]: E0130 00:29:29.530116 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ektxjv" podUID="bfae5940-0f71-4c0a-92bc-3296f59b008c"
Jan 30 00:29:40 crc kubenswrapper[5104]: E0130 00:29:40.540362 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ektxjv" podUID="bfae5940-0f71-4c0a-92bc-3296f59b008c"
Jan 30 00:29:53 crc kubenswrapper[5104]: E0130 00:29:53.531697 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ektxjv" podUID="bfae5940-0f71-4c0a-92bc-3296f59b008c"
Jan 30 00:30:00 crc kubenswrapper[5104]: I0130 00:30:00.082682 5104 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-phfng"]
Jan 30 00:30:00 crc kubenswrapper[5104]: I0130 00:30:00.087068 5104 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="855e27ba-d39f-4c5d-a5c6-45ae6423ebaf" containerName="oc"
Jan 30 00:30:00 crc kubenswrapper[5104]: I0130 00:30:00.087101 5104 state_mem.go:107] "Deleted CPUSet assignment" podUID="855e27ba-d39f-4c5d-a5c6-45ae6423ebaf" containerName="oc"
Jan 30 00:30:00 crc kubenswrapper[5104]: I0130 00:30:00.087115 5104 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6e964415-d51f-4a72-b159-79664cfded67" containerName="copy"
Jan 30 00:30:00 crc kubenswrapper[5104]: I0130 00:30:00.087123 5104 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e964415-d51f-4a72-b159-79664cfded67" containerName="copy"
Jan 30 00:30:00 crc kubenswrapper[5104]: I0130 00:30:00.087138 5104 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6e964415-d51f-4a72-b159-79664cfded67" containerName="gather"
Jan 30 00:30:00 crc kubenswrapper[5104]: I0130 00:30:00.087150 5104 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e964415-d51f-4a72-b159-79664cfded67" containerName="gather"
Jan 30 00:30:00 crc kubenswrapper[5104]: I0130 00:30:00.087259 5104 memory_manager.go:356] "RemoveStaleState removing state" podUID="855e27ba-d39f-4c5d-a5c6-45ae6423ebaf" containerName="oc"
Jan 30 00:30:00 crc kubenswrapper[5104]: I0130 00:30:00.087275 5104 memory_manager.go:356] "RemoveStaleState removing state" podUID="6e964415-d51f-4a72-b159-79664cfded67" containerName="gather"
Jan 30 00:30:00 crc kubenswrapper[5104]: I0130 00:30:00.087291 5104 memory_manager.go:356] "RemoveStaleState removing state" podUID="6e964415-d51f-4a72-b159-79664cfded67" containerName="copy"
Jan 30 00:30:00 crc kubenswrapper[5104]: I0130 00:30:00.093609 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-phfng"]
Jan 30 00:30:00 crc kubenswrapper[5104]: I0130 00:30:00.094113 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-phfng"
Jan 30 00:30:00 crc kubenswrapper[5104]: I0130 00:30:00.185781 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6dmd\" (UniqueName: \"kubernetes.io/projected/771e303f-a5b6-4717-9330-8a23fad4189b-kube-api-access-v6dmd\") pod \"certified-operators-phfng\" (UID: \"771e303f-a5b6-4717-9330-8a23fad4189b\") " pod="openshift-marketplace/certified-operators-phfng"
Jan 30 00:30:00 crc kubenswrapper[5104]: I0130 00:30:00.185879 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/771e303f-a5b6-4717-9330-8a23fad4189b-utilities\") pod \"certified-operators-phfng\" (UID: \"771e303f-a5b6-4717-9330-8a23fad4189b\") " pod="openshift-marketplace/certified-operators-phfng"
Jan 30 00:30:00 crc kubenswrapper[5104]: I0130 00:30:00.185927 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/771e303f-a5b6-4717-9330-8a23fad4189b-catalog-content\") pod \"certified-operators-phfng\" (UID: \"771e303f-a5b6-4717-9330-8a23fad4189b\") " pod="openshift-marketplace/certified-operators-phfng"
Jan 30 00:30:00 crc kubenswrapper[5104]: I0130 00:30:00.199667 5104 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495550-kh2b6"]
Jan 30 00:30:00 crc kubenswrapper[5104]: I0130 00:30:00.216028 5104 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29495550-wp48p"]
Jan 30 00:30:00 crc kubenswrapper[5104]: I0130 00:30:00.216269 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495550-kh2b6"
Jan 30 00:30:00 crc kubenswrapper[5104]: I0130 00:30:00.220267 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\""
Jan 30 00:30:00 crc kubenswrapper[5104]: I0130 00:30:00.220667 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\""
Jan 30 00:30:00 crc kubenswrapper[5104]: I0130 00:30:00.221203 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495550-kh2b6"]
Jan 30 00:30:00 crc kubenswrapper[5104]: I0130 00:30:00.221301 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29495550-wp48p"
Jan 30 00:30:00 crc kubenswrapper[5104]: I0130 00:30:00.221424 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29495550-wp48p"]
Jan 30 00:30:00 crc kubenswrapper[5104]: I0130 00:30:00.222535 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\""
Jan 30 00:30:00 crc kubenswrapper[5104]: I0130 00:30:00.222727 5104 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\""
Jan 30 00:30:00 crc kubenswrapper[5104]: I0130 00:30:00.223641 5104 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-xh9r9\""
Jan 30 00:30:00 crc kubenswrapper[5104]: I0130 00:30:00.287498 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/771e303f-a5b6-4717-9330-8a23fad4189b-catalog-content\") pod \"certified-operators-phfng\" (UID: \"771e303f-a5b6-4717-9330-8a23fad4189b\") " pod="openshift-marketplace/certified-operators-phfng"
Jan 30 00:30:00 crc kubenswrapper[5104]: I0130 00:30:00.287611 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-v6dmd\" (UniqueName: \"kubernetes.io/projected/771e303f-a5b6-4717-9330-8a23fad4189b-kube-api-access-v6dmd\") pod \"certified-operators-phfng\" (UID: \"771e303f-a5b6-4717-9330-8a23fad4189b\") " pod="openshift-marketplace/certified-operators-phfng"
Jan 30 00:30:00 crc kubenswrapper[5104]: I0130 00:30:00.287679 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/771e303f-a5b6-4717-9330-8a23fad4189b-utilities\") pod \"certified-operators-phfng\" (UID: \"771e303f-a5b6-4717-9330-8a23fad4189b\") " pod="openshift-marketplace/certified-operators-phfng"
Jan 30 00:30:00 crc kubenswrapper[5104]: I0130 00:30:00.288080 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/771e303f-a5b6-4717-9330-8a23fad4189b-catalog-content\") pod \"certified-operators-phfng\" (UID: \"771e303f-a5b6-4717-9330-8a23fad4189b\") " pod="openshift-marketplace/certified-operators-phfng"
Jan 30 00:30:00 crc kubenswrapper[5104]: I0130 00:30:00.288144 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/771e303f-a5b6-4717-9330-8a23fad4189b-utilities\") pod \"certified-operators-phfng\" (UID: \"771e303f-a5b6-4717-9330-8a23fad4189b\") " pod="openshift-marketplace/certified-operators-phfng"
Jan 30 00:30:00 crc kubenswrapper[5104]: I0130 00:30:00.312697 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6dmd\" (UniqueName: \"kubernetes.io/projected/771e303f-a5b6-4717-9330-8a23fad4189b-kube-api-access-v6dmd\") pod \"certified-operators-phfng\" (UID: \"771e303f-a5b6-4717-9330-8a23fad4189b\") " pod="openshift-marketplace/certified-operators-phfng"
Jan 30 00:30:00 crc kubenswrapper[5104]: I0130 00:30:00.389329 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkrlk\" (UniqueName: \"kubernetes.io/projected/a797a5b0-cdcf-4304-99c2-c144c96f4c37-kube-api-access-hkrlk\") pod \"auto-csr-approver-29495550-wp48p\" (UID: \"a797a5b0-cdcf-4304-99c2-c144c96f4c37\") " pod="openshift-infra/auto-csr-approver-29495550-wp48p"
Jan 30 00:30:00 crc kubenswrapper[5104]: I0130 00:30:00.389393 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rn7gt\" (UniqueName: \"kubernetes.io/projected/7f51675f-1d32-458e-b3cd-b752f30c1fe1-kube-api-access-rn7gt\") pod \"collect-profiles-29495550-kh2b6\" (UID: \"7f51675f-1d32-458e-b3cd-b752f30c1fe1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495550-kh2b6"
Jan 30 00:30:00 crc kubenswrapper[5104]: I0130 00:30:00.389578 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7f51675f-1d32-458e-b3cd-b752f30c1fe1-config-volume\") pod \"collect-profiles-29495550-kh2b6\" (UID: \"7f51675f-1d32-458e-b3cd-b752f30c1fe1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495550-kh2b6"
Jan 30 00:30:00 crc kubenswrapper[5104]: I0130 00:30:00.389622 5104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7f51675f-1d32-458e-b3cd-b752f30c1fe1-secret-volume\") pod \"collect-profiles-29495550-kh2b6\" (UID: \"7f51675f-1d32-458e-b3cd-b752f30c1fe1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495550-kh2b6"
Jan 30 00:30:00 crc kubenswrapper[5104]: I0130 00:30:00.427225 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-phfng"
Jan 30 00:30:00 crc kubenswrapper[5104]: I0130 00:30:00.490657 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7f51675f-1d32-458e-b3cd-b752f30c1fe1-config-volume\") pod \"collect-profiles-29495550-kh2b6\" (UID: \"7f51675f-1d32-458e-b3cd-b752f30c1fe1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495550-kh2b6"
Jan 30 00:30:00 crc kubenswrapper[5104]: I0130 00:30:00.490698 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7f51675f-1d32-458e-b3cd-b752f30c1fe1-secret-volume\") pod \"collect-profiles-29495550-kh2b6\" (UID: \"7f51675f-1d32-458e-b3cd-b752f30c1fe1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495550-kh2b6"
Jan 30 00:30:00 crc kubenswrapper[5104]: I0130 00:30:00.490735 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hkrlk\" (UniqueName: \"kubernetes.io/projected/a797a5b0-cdcf-4304-99c2-c144c96f4c37-kube-api-access-hkrlk\") pod \"auto-csr-approver-29495550-wp48p\" (UID: \"a797a5b0-cdcf-4304-99c2-c144c96f4c37\") " pod="openshift-infra/auto-csr-approver-29495550-wp48p"
Jan 30 00:30:00 crc kubenswrapper[5104]: I0130 00:30:00.490777 5104 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rn7gt\" (UniqueName: \"kubernetes.io/projected/7f51675f-1d32-458e-b3cd-b752f30c1fe1-kube-api-access-rn7gt\") pod \"collect-profiles-29495550-kh2b6\" (UID: \"7f51675f-1d32-458e-b3cd-b752f30c1fe1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495550-kh2b6"
Jan 30 00:30:00 crc kubenswrapper[5104]: I0130 00:30:00.491962 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7f51675f-1d32-458e-b3cd-b752f30c1fe1-config-volume\") pod \"collect-profiles-29495550-kh2b6\" (UID: \"7f51675f-1d32-458e-b3cd-b752f30c1fe1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495550-kh2b6"
Jan 30 00:30:00 crc kubenswrapper[5104]: I0130 00:30:00.495320 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7f51675f-1d32-458e-b3cd-b752f30c1fe1-secret-volume\") pod \"collect-profiles-29495550-kh2b6\" (UID: \"7f51675f-1d32-458e-b3cd-b752f30c1fe1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495550-kh2b6"
Jan 30 00:30:00 crc kubenswrapper[5104]: I0130 00:30:00.507285 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rn7gt\" (UniqueName: \"kubernetes.io/projected/7f51675f-1d32-458e-b3cd-b752f30c1fe1-kube-api-access-rn7gt\") pod \"collect-profiles-29495550-kh2b6\" (UID: \"7f51675f-1d32-458e-b3cd-b752f30c1fe1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495550-kh2b6"
Jan 30 00:30:00 crc kubenswrapper[5104]: I0130 00:30:00.509028 5104 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkrlk\" (UniqueName: \"kubernetes.io/projected/a797a5b0-cdcf-4304-99c2-c144c96f4c37-kube-api-access-hkrlk\") pod \"auto-csr-approver-29495550-wp48p\" (UID: \"a797a5b0-cdcf-4304-99c2-c144c96f4c37\") " pod="openshift-infra/auto-csr-approver-29495550-wp48p"
Jan 30 00:30:00 crc kubenswrapper[5104]: I0130 00:30:00.538331 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495550-kh2b6"
Jan 30 00:30:00 crc kubenswrapper[5104]: I0130 00:30:00.547632 5104 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29495550-wp48p"
Jan 30 00:30:00 crc kubenswrapper[5104]: I0130 00:30:00.898238 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-phfng"]
Jan 30 00:30:00 crc kubenswrapper[5104]: W0130 00:30:00.900949 5104 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod771e303f_a5b6_4717_9330_8a23fad4189b.slice/crio-e94623f5e6bc6aa44031b0b1e9017dc02327a1e3b7ac9ba16d2bfe255e870e9f WatchSource:0}: Error finding container e94623f5e6bc6aa44031b0b1e9017dc02327a1e3b7ac9ba16d2bfe255e870e9f: Status 404 returned error can't find the container with id e94623f5e6bc6aa44031b0b1e9017dc02327a1e3b7ac9ba16d2bfe255e870e9f
Jan 30 00:30:00 crc kubenswrapper[5104]: I0130 00:30:00.981142 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495550-kh2b6"]
Jan 30 00:30:01 crc kubenswrapper[5104]: I0130 00:30:01.000089 5104 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29495550-wp48p"]
Jan 30 00:30:01 crc kubenswrapper[5104]: W0130 00:30:01.020730 5104 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda797a5b0_cdcf_4304_99c2_c144c96f4c37.slice/crio-063be7c9a791945db3fb883a35fe4f0d11599855e8e3ee073f5714cf29bfd3fb WatchSource:0}: Error finding container 063be7c9a791945db3fb883a35fe4f0d11599855e8e3ee073f5714cf29bfd3fb: Status 404 returned error can't find the container with id 063be7c9a791945db3fb883a35fe4f0d11599855e8e3ee073f5714cf29bfd3fb
Jan 30 00:30:01 crc kubenswrapper[5104]: I0130 00:30:01.301656 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495550-wp48p" event={"ID":"a797a5b0-cdcf-4304-99c2-c144c96f4c37","Type":"ContainerStarted","Data":"063be7c9a791945db3fb883a35fe4f0d11599855e8e3ee073f5714cf29bfd3fb"}
Jan 30 00:30:01 crc kubenswrapper[5104]: I0130 00:30:01.303968 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495550-kh2b6" event={"ID":"7f51675f-1d32-458e-b3cd-b752f30c1fe1","Type":"ContainerStarted","Data":"c3cc6d3c95def9c2d94110e403ab6452e6f5a33da5d1f542f970ff19aab514a5"}
Jan 30 00:30:01 crc kubenswrapper[5104]: I0130 00:30:01.304019 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495550-kh2b6" event={"ID":"7f51675f-1d32-458e-b3cd-b752f30c1fe1","Type":"ContainerStarted","Data":"e763e47427c24f4e0f90d8a618b438ffa4accdb8a155b284a03268c1e247d1ae"}
Jan 30 00:30:01 crc kubenswrapper[5104]: I0130 00:30:01.305989 5104 generic.go:358] "Generic (PLEG): container finished" podID="771e303f-a5b6-4717-9330-8a23fad4189b" containerID="ef865aa091bfb30522fdf21d5653dc60fa0ef1333b92675593e0629c5bd3ffd3" exitCode=0
Jan 30 00:30:01 crc kubenswrapper[5104]: I0130 00:30:01.306149 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-phfng" event={"ID":"771e303f-a5b6-4717-9330-8a23fad4189b","Type":"ContainerDied","Data":"ef865aa091bfb30522fdf21d5653dc60fa0ef1333b92675593e0629c5bd3ffd3"}
Jan 30 00:30:01 crc kubenswrapper[5104]: I0130 00:30:01.306191 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-phfng" event={"ID":"771e303f-a5b6-4717-9330-8a23fad4189b","Type":"ContainerStarted","Data":"e94623f5e6bc6aa44031b0b1e9017dc02327a1e3b7ac9ba16d2bfe255e870e9f"}
Jan 30 00:30:01 crc kubenswrapper[5104]: I0130 00:30:01.324094 5104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29495550-kh2b6" podStartSLOduration=1.3240721149999999 podStartE2EDuration="1.324072115s" podCreationTimestamp="2026-01-30 00:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:30:01.321692791 +0000 UTC m=+1182.054032020" watchObservedRunningTime="2026-01-30 00:30:01.324072115 +0000 UTC m=+1182.056411344"
Jan 30 00:30:02 crc kubenswrapper[5104]: I0130 00:30:02.315239 5104 generic.go:358] "Generic (PLEG): container finished" podID="7f51675f-1d32-458e-b3cd-b752f30c1fe1" containerID="c3cc6d3c95def9c2d94110e403ab6452e6f5a33da5d1f542f970ff19aab514a5" exitCode=0
Jan 30 00:30:02 crc kubenswrapper[5104]: I0130 00:30:02.315347 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495550-kh2b6" event={"ID":"7f51675f-1d32-458e-b3cd-b752f30c1fe1","Type":"ContainerDied","Data":"c3cc6d3c95def9c2d94110e403ab6452e6f5a33da5d1f542f970ff19aab514a5"}
Jan 30 00:30:03 crc kubenswrapper[5104]: I0130 00:30:03.600310 5104 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495550-kh2b6"
Jan 30 00:30:03 crc kubenswrapper[5104]: I0130 00:30:03.647118 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rn7gt\" (UniqueName: \"kubernetes.io/projected/7f51675f-1d32-458e-b3cd-b752f30c1fe1-kube-api-access-rn7gt\") pod \"7f51675f-1d32-458e-b3cd-b752f30c1fe1\" (UID: \"7f51675f-1d32-458e-b3cd-b752f30c1fe1\") "
Jan 30 00:30:03 crc kubenswrapper[5104]: I0130 00:30:03.647347 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7f51675f-1d32-458e-b3cd-b752f30c1fe1-secret-volume\") pod \"7f51675f-1d32-458e-b3cd-b752f30c1fe1\" (UID: \"7f51675f-1d32-458e-b3cd-b752f30c1fe1\") "
Jan 30 00:30:03 crc kubenswrapper[5104]: I0130 00:30:03.647435 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7f51675f-1d32-458e-b3cd-b752f30c1fe1-config-volume\") pod \"7f51675f-1d32-458e-b3cd-b752f30c1fe1\" (UID: \"7f51675f-1d32-458e-b3cd-b752f30c1fe1\") "
Jan 30 00:30:03 crc kubenswrapper[5104]: I0130 00:30:03.647925 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f51675f-1d32-458e-b3cd-b752f30c1fe1-config-volume" (OuterVolumeSpecName: "config-volume") pod "7f51675f-1d32-458e-b3cd-b752f30c1fe1" (UID: "7f51675f-1d32-458e-b3cd-b752f30c1fe1"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 30 00:30:03 crc kubenswrapper[5104]: I0130 00:30:03.653764 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f51675f-1d32-458e-b3cd-b752f30c1fe1-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7f51675f-1d32-458e-b3cd-b752f30c1fe1" (UID: "7f51675f-1d32-458e-b3cd-b752f30c1fe1"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 30 00:30:03 crc kubenswrapper[5104]: I0130 00:30:03.654621 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f51675f-1d32-458e-b3cd-b752f30c1fe1-kube-api-access-rn7gt" (OuterVolumeSpecName: "kube-api-access-rn7gt") pod "7f51675f-1d32-458e-b3cd-b752f30c1fe1" (UID: "7f51675f-1d32-458e-b3cd-b752f30c1fe1"). InnerVolumeSpecName "kube-api-access-rn7gt". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 30 00:30:03 crc kubenswrapper[5104]: I0130 00:30:03.749071 5104 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rn7gt\" (UniqueName: \"kubernetes.io/projected/7f51675f-1d32-458e-b3cd-b752f30c1fe1-kube-api-access-rn7gt\") on node \"crc\" DevicePath \"\""
Jan 30 00:30:03 crc kubenswrapper[5104]: I0130 00:30:03.749103 5104 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7f51675f-1d32-458e-b3cd-b752f30c1fe1-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 30 00:30:03 crc kubenswrapper[5104]: I0130 00:30:03.749112 5104 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7f51675f-1d32-458e-b3cd-b752f30c1fe1-config-volume\") on node \"crc\" DevicePath \"\""
Jan 30 00:30:04 crc kubenswrapper[5104]: I0130 00:30:04.336474 5104 generic.go:358] "Generic (PLEG): container finished" podID="a797a5b0-cdcf-4304-99c2-c144c96f4c37" containerID="db1cbcb7970bc59df8c05d7cb1fd51d6c5751f980fb5a0c39343dde3e9e49a67" exitCode=0
Jan 30 00:30:04 crc kubenswrapper[5104]: I0130 00:30:04.336817 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495550-wp48p" event={"ID":"a797a5b0-cdcf-4304-99c2-c144c96f4c37","Type":"ContainerDied","Data":"db1cbcb7970bc59df8c05d7cb1fd51d6c5751f980fb5a0c39343dde3e9e49a67"}
Jan 30 00:30:04 crc kubenswrapper[5104]: I0130 00:30:04.339164 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495550-kh2b6" event={"ID":"7f51675f-1d32-458e-b3cd-b752f30c1fe1","Type":"ContainerDied","Data":"e763e47427c24f4e0f90d8a618b438ffa4accdb8a155b284a03268c1e247d1ae"}
Jan 30 00:30:04 crc kubenswrapper[5104]: I0130 00:30:04.339238 5104 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495550-kh2b6"
Jan 30 00:30:04 crc kubenswrapper[5104]: I0130 00:30:04.339250 5104 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e763e47427c24f4e0f90d8a618b438ffa4accdb8a155b284a03268c1e247d1ae"
Jan 30 00:30:04 crc kubenswrapper[5104]: I0130 00:30:04.341568 5104 generic.go:358] "Generic (PLEG): container finished" podID="771e303f-a5b6-4717-9330-8a23fad4189b" containerID="c85dd57eaafef3ce95373c8a3a49458cdb8080e8099dd69143e4b662220ddd83" exitCode=0
Jan 30 00:30:04 crc kubenswrapper[5104]: I0130 00:30:04.342012 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-phfng" event={"ID":"771e303f-a5b6-4717-9330-8a23fad4189b","Type":"ContainerDied","Data":"c85dd57eaafef3ce95373c8a3a49458cdb8080e8099dd69143e4b662220ddd83"}
Jan 30 00:30:05 crc kubenswrapper[5104]: I0130 00:30:05.350698 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-phfng" event={"ID":"771e303f-a5b6-4717-9330-8a23fad4189b","Type":"ContainerStarted","Data":"ff48b89e229bff71c6f4c5dbbd936b3c69c1e360d29a0fb8d546538fb0ff2176"}
Jan 30 00:30:05 crc kubenswrapper[5104]: I0130 00:30:05.371794 5104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-phfng" podStartSLOduration=3.521763477 podStartE2EDuration="5.371768104s" podCreationTimestamp="2026-01-30 00:30:00 +0000 UTC" firstStartedPulling="2026-01-30 00:30:01.306652105 +0000 UTC m=+1182.038991324" lastFinishedPulling="2026-01-30 00:30:03.156656732 +0000 UTC m=+1183.888995951" observedRunningTime="2026-01-30 00:30:05.368911947 +0000 UTC m=+1186.101251186" watchObservedRunningTime="2026-01-30 00:30:05.371768104 +0000 UTC m=+1186.104107323"
Jan 30 00:30:05 crc kubenswrapper[5104]: I0130 00:30:05.590392 5104 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29495550-wp48p"
Jan 30 00:30:05 crc kubenswrapper[5104]: I0130 00:30:05.676941 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hkrlk\" (UniqueName: \"kubernetes.io/projected/a797a5b0-cdcf-4304-99c2-c144c96f4c37-kube-api-access-hkrlk\") pod \"a797a5b0-cdcf-4304-99c2-c144c96f4c37\" (UID: \"a797a5b0-cdcf-4304-99c2-c144c96f4c37\") "
Jan 30 00:30:05 crc kubenswrapper[5104]: I0130 00:30:05.689006 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a797a5b0-cdcf-4304-99c2-c144c96f4c37-kube-api-access-hkrlk" (OuterVolumeSpecName: "kube-api-access-hkrlk") pod "a797a5b0-cdcf-4304-99c2-c144c96f4c37" (UID: "a797a5b0-cdcf-4304-99c2-c144c96f4c37"). InnerVolumeSpecName "kube-api-access-hkrlk". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 30 00:30:05 crc kubenswrapper[5104]: I0130 00:30:05.778368 5104 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hkrlk\" (UniqueName: \"kubernetes.io/projected/a797a5b0-cdcf-4304-99c2-c144c96f4c37-kube-api-access-hkrlk\") on node \"crc\" DevicePath \"\""
Jan 30 00:30:06 crc kubenswrapper[5104]: I0130 00:30:06.367102 5104 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29495550-wp48p"
Jan 30 00:30:06 crc kubenswrapper[5104]: I0130 00:30:06.367140 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495550-wp48p" event={"ID":"a797a5b0-cdcf-4304-99c2-c144c96f4c37","Type":"ContainerDied","Data":"063be7c9a791945db3fb883a35fe4f0d11599855e8e3ee073f5714cf29bfd3fb"}
Jan 30 00:30:06 crc kubenswrapper[5104]: I0130 00:30:06.367187 5104 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="063be7c9a791945db3fb883a35fe4f0d11599855e8e3ee073f5714cf29bfd3fb"
Jan 30 00:30:06 crc kubenswrapper[5104]: E0130 00:30:06.527263 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ektxjv" podUID="bfae5940-0f71-4c0a-92bc-3296f59b008c"
Jan 30 00:30:06 crc kubenswrapper[5104]: I0130 00:30:06.646533 5104 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29495544-cmvpc"]
Jan 30 00:30:06 crc kubenswrapper[5104]: I0130 00:30:06.650414 5104
kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29495544-cmvpc"] Jan 30 00:30:08 crc kubenswrapper[5104]: I0130 00:30:08.533620 5104 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f623fd34-1c00-4bdd-8dfe-7750937fad34" path="/var/lib/kubelet/pods/f623fd34-1c00-4bdd-8dfe-7750937fad34/volumes" Jan 30 00:30:10 crc kubenswrapper[5104]: I0130 00:30:10.427660 5104 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-phfng" Jan 30 00:30:10 crc kubenswrapper[5104]: I0130 00:30:10.427751 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-phfng" Jan 30 00:30:10 crc kubenswrapper[5104]: I0130 00:30:10.482269 5104 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-phfng" Jan 30 00:30:11 crc kubenswrapper[5104]: I0130 00:30:11.478693 5104 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-phfng" Jan 30 00:30:11 crc kubenswrapper[5104]: I0130 00:30:11.530121 5104 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-phfng"] Jan 30 00:30:13 crc kubenswrapper[5104]: I0130 00:30:13.428443 5104 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-phfng" podUID="771e303f-a5b6-4717-9330-8a23fad4189b" containerName="registry-server" containerID="cri-o://ff48b89e229bff71c6f4c5dbbd936b3c69c1e360d29a0fb8d546538fb0ff2176" gracePeriod=2 Jan 30 00:30:13 crc kubenswrapper[5104]: I0130 00:30:13.882714 5104 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-phfng" Jan 30 00:30:13 crc kubenswrapper[5104]: I0130 00:30:13.985273 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/771e303f-a5b6-4717-9330-8a23fad4189b-utilities\") pod \"771e303f-a5b6-4717-9330-8a23fad4189b\" (UID: \"771e303f-a5b6-4717-9330-8a23fad4189b\") " Jan 30 00:30:13 crc kubenswrapper[5104]: I0130 00:30:13.985335 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v6dmd\" (UniqueName: \"kubernetes.io/projected/771e303f-a5b6-4717-9330-8a23fad4189b-kube-api-access-v6dmd\") pod \"771e303f-a5b6-4717-9330-8a23fad4189b\" (UID: \"771e303f-a5b6-4717-9330-8a23fad4189b\") " Jan 30 00:30:13 crc kubenswrapper[5104]: I0130 00:30:13.985402 5104 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/771e303f-a5b6-4717-9330-8a23fad4189b-catalog-content\") pod \"771e303f-a5b6-4717-9330-8a23fad4189b\" (UID: \"771e303f-a5b6-4717-9330-8a23fad4189b\") " Jan 30 00:30:13 crc kubenswrapper[5104]: I0130 00:30:13.997993 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/771e303f-a5b6-4717-9330-8a23fad4189b-utilities" (OuterVolumeSpecName: "utilities") pod "771e303f-a5b6-4717-9330-8a23fad4189b" (UID: "771e303f-a5b6-4717-9330-8a23fad4189b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:30:14 crc kubenswrapper[5104]: I0130 00:30:14.001505 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/771e303f-a5b6-4717-9330-8a23fad4189b-kube-api-access-v6dmd" (OuterVolumeSpecName: "kube-api-access-v6dmd") pod "771e303f-a5b6-4717-9330-8a23fad4189b" (UID: "771e303f-a5b6-4717-9330-8a23fad4189b"). InnerVolumeSpecName "kube-api-access-v6dmd". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:30:14 crc kubenswrapper[5104]: I0130 00:30:14.015101 5104 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/771e303f-a5b6-4717-9330-8a23fad4189b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "771e303f-a5b6-4717-9330-8a23fad4189b" (UID: "771e303f-a5b6-4717-9330-8a23fad4189b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:30:14 crc kubenswrapper[5104]: I0130 00:30:14.086805 5104 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/771e303f-a5b6-4717-9330-8a23fad4189b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 00:30:14 crc kubenswrapper[5104]: I0130 00:30:14.086891 5104 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/771e303f-a5b6-4717-9330-8a23fad4189b-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 00:30:14 crc kubenswrapper[5104]: I0130 00:30:14.086907 5104 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-v6dmd\" (UniqueName: \"kubernetes.io/projected/771e303f-a5b6-4717-9330-8a23fad4189b-kube-api-access-v6dmd\") on node \"crc\" DevicePath \"\"" Jan 30 00:30:14 crc kubenswrapper[5104]: I0130 00:30:14.437836 5104 generic.go:358] "Generic (PLEG): container finished" podID="771e303f-a5b6-4717-9330-8a23fad4189b" containerID="ff48b89e229bff71c6f4c5dbbd936b3c69c1e360d29a0fb8d546538fb0ff2176" exitCode=0 Jan 30 00:30:14 crc kubenswrapper[5104]: I0130 00:30:14.438017 5104 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-phfng" Jan 30 00:30:14 crc kubenswrapper[5104]: I0130 00:30:14.438081 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-phfng" event={"ID":"771e303f-a5b6-4717-9330-8a23fad4189b","Type":"ContainerDied","Data":"ff48b89e229bff71c6f4c5dbbd936b3c69c1e360d29a0fb8d546538fb0ff2176"} Jan 30 00:30:14 crc kubenswrapper[5104]: I0130 00:30:14.443518 5104 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-phfng" event={"ID":"771e303f-a5b6-4717-9330-8a23fad4189b","Type":"ContainerDied","Data":"e94623f5e6bc6aa44031b0b1e9017dc02327a1e3b7ac9ba16d2bfe255e870e9f"} Jan 30 00:30:14 crc kubenswrapper[5104]: I0130 00:30:14.443558 5104 scope.go:117] "RemoveContainer" containerID="ff48b89e229bff71c6f4c5dbbd936b3c69c1e360d29a0fb8d546538fb0ff2176" Jan 30 00:30:14 crc kubenswrapper[5104]: I0130 00:30:14.472998 5104 scope.go:117] "RemoveContainer" containerID="c85dd57eaafef3ce95373c8a3a49458cdb8080e8099dd69143e4b662220ddd83" Jan 30 00:30:14 crc kubenswrapper[5104]: I0130 00:30:14.482926 5104 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-phfng"] Jan 30 00:30:14 crc kubenswrapper[5104]: I0130 00:30:14.489531 5104 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-phfng"] Jan 30 00:30:14 crc kubenswrapper[5104]: I0130 00:30:14.501415 5104 scope.go:117] "RemoveContainer" containerID="ef865aa091bfb30522fdf21d5653dc60fa0ef1333b92675593e0629c5bd3ffd3" Jan 30 00:30:14 crc kubenswrapper[5104]: I0130 00:30:14.536048 5104 scope.go:117] "RemoveContainer" containerID="ff48b89e229bff71c6f4c5dbbd936b3c69c1e360d29a0fb8d546538fb0ff2176" Jan 30 00:30:14 crc kubenswrapper[5104]: E0130 00:30:14.536534 5104 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"ff48b89e229bff71c6f4c5dbbd936b3c69c1e360d29a0fb8d546538fb0ff2176\": container with ID starting with ff48b89e229bff71c6f4c5dbbd936b3c69c1e360d29a0fb8d546538fb0ff2176 not found: ID does not exist" containerID="ff48b89e229bff71c6f4c5dbbd936b3c69c1e360d29a0fb8d546538fb0ff2176" Jan 30 00:30:14 crc kubenswrapper[5104]: I0130 00:30:14.536606 5104 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff48b89e229bff71c6f4c5dbbd936b3c69c1e360d29a0fb8d546538fb0ff2176"} err="failed to get container status \"ff48b89e229bff71c6f4c5dbbd936b3c69c1e360d29a0fb8d546538fb0ff2176\": rpc error: code = NotFound desc = could not find container \"ff48b89e229bff71c6f4c5dbbd936b3c69c1e360d29a0fb8d546538fb0ff2176\": container with ID starting with ff48b89e229bff71c6f4c5dbbd936b3c69c1e360d29a0fb8d546538fb0ff2176 not found: ID does not exist" Jan 30 00:30:14 crc kubenswrapper[5104]: I0130 00:30:14.536637 5104 scope.go:117] "RemoveContainer" containerID="c85dd57eaafef3ce95373c8a3a49458cdb8080e8099dd69143e4b662220ddd83" Jan 30 00:30:14 crc kubenswrapper[5104]: E0130 00:30:14.537294 5104 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c85dd57eaafef3ce95373c8a3a49458cdb8080e8099dd69143e4b662220ddd83\": container with ID starting with c85dd57eaafef3ce95373c8a3a49458cdb8080e8099dd69143e4b662220ddd83 not found: ID does not exist" containerID="c85dd57eaafef3ce95373c8a3a49458cdb8080e8099dd69143e4b662220ddd83" Jan 30 00:30:14 crc kubenswrapper[5104]: I0130 00:30:14.537343 5104 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c85dd57eaafef3ce95373c8a3a49458cdb8080e8099dd69143e4b662220ddd83"} err="failed to get container status \"c85dd57eaafef3ce95373c8a3a49458cdb8080e8099dd69143e4b662220ddd83\": rpc error: code = NotFound desc = could not find container \"c85dd57eaafef3ce95373c8a3a49458cdb8080e8099dd69143e4b662220ddd83\": container with ID 
starting with c85dd57eaafef3ce95373c8a3a49458cdb8080e8099dd69143e4b662220ddd83 not found: ID does not exist" Jan 30 00:30:14 crc kubenswrapper[5104]: I0130 00:30:14.537373 5104 scope.go:117] "RemoveContainer" containerID="ef865aa091bfb30522fdf21d5653dc60fa0ef1333b92675593e0629c5bd3ffd3" Jan 30 00:30:14 crc kubenswrapper[5104]: E0130 00:30:14.537755 5104 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef865aa091bfb30522fdf21d5653dc60fa0ef1333b92675593e0629c5bd3ffd3\": container with ID starting with ef865aa091bfb30522fdf21d5653dc60fa0ef1333b92675593e0629c5bd3ffd3 not found: ID does not exist" containerID="ef865aa091bfb30522fdf21d5653dc60fa0ef1333b92675593e0629c5bd3ffd3" Jan 30 00:30:14 crc kubenswrapper[5104]: I0130 00:30:14.537792 5104 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef865aa091bfb30522fdf21d5653dc60fa0ef1333b92675593e0629c5bd3ffd3"} err="failed to get container status \"ef865aa091bfb30522fdf21d5653dc60fa0ef1333b92675593e0629c5bd3ffd3\": rpc error: code = NotFound desc = could not find container \"ef865aa091bfb30522fdf21d5653dc60fa0ef1333b92675593e0629c5bd3ffd3\": container with ID starting with ef865aa091bfb30522fdf21d5653dc60fa0ef1333b92675593e0629c5bd3ffd3 not found: ID does not exist" Jan 30 00:30:14 crc kubenswrapper[5104]: I0130 00:30:14.540044 5104 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="771e303f-a5b6-4717-9330-8a23fad4189b" path="/var/lib/kubelet/pods/771e303f-a5b6-4717-9330-8a23fad4189b/volumes" Jan 30 00:30:20 crc kubenswrapper[5104]: E0130 00:30:20.535903 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: 
initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ektxjv" podUID="bfae5940-0f71-4c0a-92bc-3296f59b008c" Jan 30 00:30:20 crc kubenswrapper[5104]: I0130 00:30:20.983124 5104 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bk79c_3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f/kube-multus/0.log" Jan 30 00:30:20 crc kubenswrapper[5104]: I0130 00:30:20.983363 5104 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bk79c_3b9f4a3b-9dc1-4490-849c-bb7c617a9d8f/kube-multus/0.log" Jan 30 00:30:20 crc kubenswrapper[5104]: I0130 00:30:20.995277 5104 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 30 00:30:20 crc kubenswrapper[5104]: I0130 00:30:20.995290 5104 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 30 00:30:21 crc kubenswrapper[5104]: I0130 00:30:21.658244 5104 scope.go:117] "RemoveContainer" containerID="00d35aced4bfdc93574e022fb435b909bcfe2d35d2b1a8b805e0ddeb01d1935f" Jan 30 00:30:34 crc kubenswrapper[5104]: E0130 00:30:34.529488 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" 
with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ektxjv" podUID="bfae5940-0f71-4c0a-92bc-3296f59b008c" Jan 30 00:30:46 crc kubenswrapper[5104]: E0130 00:30:46.529311 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" 
pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ektxjv" podUID="bfae5940-0f71-4c0a-92bc-3296f59b008c" Jan 30 00:30:58 crc kubenswrapper[5104]: E0130 00:30:58.529169 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ektxjv" podUID="bfae5940-0f71-4c0a-92bc-3296f59b008c" Jan 30 00:31:12 crc kubenswrapper[5104]: I0130 00:31:12.529593 5104 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 00:31:12 crc kubenswrapper[5104]: E0130 00:31:12.530391 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry 
registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ektxjv" podUID="bfae5940-0f71-4c0a-92bc-3296f59b008c" Jan 30 00:31:23 crc kubenswrapper[5104]: E0130 00:31:23.527934 5104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ektxjv" podUID="bfae5940-0f71-4c0a-92bc-3296f59b008c"